TOP500 Meanderings: InfiniBand Fends Off Supercomputing Challengers

Dec. 12, 2017

By: Michael Feldman

Ethernet remains the most popular interconnect on the TOP500 list, but InfiniBand still rules the roost when it comes to true supercomputing performance. We run the numbers and show how InfiniBand continues to dominate the top supercomputers in the world.

On the November TOP500 rankings, 228 systems were hooked together with Ethernet technology of one sort or another – 100G, 40G, 25G, 10G, or Gigabit. InfiniBand came in a distant second with 163 systems. But, as has been noted in the past, the vast majority of the Ethernet systems – more than 90 percent in the latest list – are owned by Internet or cloud service providers, web content distributors, telecom firms, and other unspecified IT businesses. These are most likely running commercial workloads of various sorts that have little to do with high performance computing.

About three-quarters of these Ethernet-based systems are installed in China, a country that has been particularly enthusiastic about running Linpack on all sorts of machinery to boost its TOP500 presence. On the current list, China has 202 systems that made the cut; the US trails with just 143.

Even taking the entire contingent of 228 Ethernet systems into account, their total aggregate performance is a relatively modest 185 petaflops (out of a total of 845 petaflops for all 500 supercomputers). Meanwhile, the 163 InfiniBand-based systems on the list deliver nearly 314 petaflops, 93 of which are attributed to the current number one system, TaihuLight. Although TaihuLight’s interconnect is officially listed as a custom network, the underlying componentry is Mellanox InfiniBand. Even ignoring that outlier, InfiniBand still beats Ethernet in total aggregate performance by a good margin.
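For readers who want to run the numbers themselves, the minimal Python sketch below reproduces the share arithmetic using only the aggregate figures quoted above; the variable names and hard-coded totals are illustrative and are not drawn from any official TOP500 data feed.

```python
# Aggregate Rmax figures (petaflops) as quoted in the article for the
# November 2017 list; TaihuLight is broken out separately because its
# "custom" network is built on Mellanox InfiniBand componentry.
TOTAL_LIST_PFLOPS = 845.0

ethernet_pflops = 185.0      # 228 Ethernet-connected systems
infiniband_pflops = 314.0    # 163 InfiniBand systems, TaihuLight included
taihulight_pflops = 93.0     # the number one system

def share(pflops, total=TOTAL_LIST_PFLOPS):
    """Return a family's share of the list's aggregate performance, in percent."""
    return 100.0 * pflops / total

print(f"Ethernet:                   {share(ethernet_pflops):.1f}% of list performance")
print(f"InfiniBand (all):           {share(infiniband_pflops):.1f}%")
print(f"InfiniBand w/o TaihuLight:  {share(infiniband_pflops - taihulight_pflops):.1f}%")
# Even without the 93-petaflop outlier, InfiniBand's roughly 221 petaflops
# still comfortably exceeds Ethernet's 185.
```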

The only major challenger to InfiniBand in the “true” supercomputing space is Intel’s 100 Gbps Omni-Path fabric. On the current TOP500 list, it claims 35 systems, representing a respectable 80 petaflops. While that’s still a distant second to InfiniBand, the numbers even out if you compare Omni-Path to InfiniBand’s 100 Gbps technology, namely EDR. The latter represents 41 systems and 73.8 petaflops.

Here it’s worth noting that Cray’s Aries interconnect is also found in 41 systems on the list. Since all of those systems are relatively large Cray XC supercomputers, their total aggregate performance is an impressive 132.6 petaflops. In other words, Aries, with just 8 percent of the supercomputers, claims more than 15 percent of the list’s performance.
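The same share arithmetic applies to Aries, Omni-Path, and EDR. The sketch below, again using only the system counts and petaflop totals quoted in this article, contrasts each family’s share of systems with its share of the list’s performance.

```python
# Interconnect family -> (system count, aggregate Rmax in petaflops), taken
# from the figures quoted above for the November 2017 list.
families = {
    "Aries":          (41, 132.6),
    "Omni-Path":      (35, 80.0),
    "InfiniBand EDR": (41, 73.8),
}

TOTAL_SYSTEMS = 500
TOTAL_PFLOPS = 845.0

for name, (count, pflops) in families.items():
    system_share = 100.0 * count / TOTAL_SYSTEMS
    perf_share = 100.0 * pflops / TOTAL_PFLOPS
    print(f"{name:15s} {system_share:4.1f}% of systems, {perf_share:4.1f}% of performance")
# Aries comes out at 8.2% of systems but 15.7% of the list's performance,
# reflecting the relatively large size of the Cray XC machines it connects.
```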

Aries, however, will not have a sequel. Cray sold the intellectual property upon which Aries was based to Intel in 2012, and that technology was subsequently incorporated into the Omni-Path products. Looking ahead to Cray’s next-generation “Shasta” supercomputers, the company intends to support the second-generation Omni-Path fabric, along with other types of networks. Since the Shasta platform will be the successor to both the XC supercomputer line and the CS cluster line, one can assume that Cray will be supporting both InfiniBand and Ethernet.

Peering into the future, InfiniBand looks like it will maintain its dominant position in the HPC space, at least for the next few years. Mellanox is planning to introduce its HDR InfiniBand ConnectX-6 adapters and Quantum switches in 2018, which will provide 200 Gbps point-to-point connectivity. Although Mellanox originally planned to begin rolling out HDR in 2017, the absence of a 200 Gbps competitor and the constraints of PCIe gen3 bandwidth on the server made 2018 a more opportune timeframe.

The Department of Energy’s Summit and Sierra supercomputers will be outfitted with EDR, which will add more than 300 petaflops under the InfiniBand banner in 2018. Those two systems just missed the opportunity to get HDR, which would have worked out quite well, inasmuch as both machines are powered by the PCIe gen4-capable Power9 processors. In any case, we should expect to see at least a handful of HDR installations on the TOP500 list in the coming year.

Presumably, Omni-Path 2.0 will supply 200 Gbps connectivity, but Intel is saying its second-generation fabric won’t appear until 2019, at the earliest. That leaves at least a one-year window of opportunity for Mellanox to establish unchallenged bandwidth superiority in the HPC space. Ethernet will also reach 200G speeds in the next year or two, but probably after HDR InfiniBand hits the streets. The first such Ethernet products could be supplied by Mellanox itself, since the company is able to reuse the same underlying technology developed for HDR. Nevertheless, InfiniBand tends to have lower latencies than Ethernet at comparable data rates, so it will remain the preferred choice for many, if not most, HPC customers.

Mellanox has been particularly adept at keeping its InfiniBand technology at the leading edge of performance, while at the same time adding useful features, like in-network computing and multi-host adapters, for performance-demanding users and other scale-out customers. It’s accomplished this despite competition from much larger and more established networking companies. At some point, Ethernet bandwidth and latency numbers might sync up with those of InfiniBand, but people have speculated about that scenario for years and it hasn’t happened yet. If it does, Mellanox will have to reach into its bag of tricks once more.