News

Ethernet Will Get Spotlight at SC16

Nov. 2, 2016

By: Michael Feldman

In two weeks, the Ethernet Alliance will be talking up its namesake technology and roadmap at SC16 in front of more than 10,000 conference attendees. While Ethernet tends to get short shrift in most HPC circles these days, the November supercomputing conference gives Ethernet’s backers an opportunity to remind users that the standards bodies and the technology’s commercial adopters have been busy pushing it to much faster speeds.

The Alliance’s principal effort at SC16 centers on a multi-vendor interoperability demo at its booth, which shows how Ethernet can be applied to HPC setups. From the press release:

The Ethernet Alliance’s SC16 demo combines server, switch, testing, cabling, and optical equipment from a diverse array of vendors in a simulated real-world data center environment, and underscores Ethernet’s importance to HPC. Among technologies and equipment being showcased in the demo are copper and fiber cable assemblies; fiber trunks; 1, 10, 25, 40, 50, and 100GbE switches; and 25, 50, and 100GbE NIC adapters.

As it stands today, Ethernet deployment in HPC is wide but not deep, being most prevalent in storage subsystems, cloud environments, and application areas like data analytics. These tend to be circumstances where HPC and non-HPC usage overlap, or that gray area between HPC, enterprise, and web computing. In mainstream HPC, that is, science and engineering applications running on tightly coupled clusters, InfiniBand dominates. The reason is straightforward: for cluster computing, InfiniBand delivers higher bandwidth and lower latency than Ethernet, and does so cost-effectively.

It should be noted that in the latest TOP500 list, released in June, Ethernet-based systems were actually more prevalent than InfiniBand-based ones (218 vs. 206). But that has to do with the fact that the TOP500 list contains quite a few non-HPC systems, specifically clusters that provide generic cloud computing, web hosting, and telecom services. When you remove those systems, InfiniBand dominates the remaining list rather convincingly, powering over two-thirds of those machines.

Nonetheless, Ethernet has a story to tell HPC users. For starters, unlike InfiniBand, which has but a single provider in Mellanox, Ethernet is supported by a vast ecosystem of hardware and software vendors. That vastness, though, makes it harder to pull the entire ecosystem from one speed grade to the next.

But by at least one measure – bandwidth – Ethernet has caught up to InfiniBand. EDR InfiniBand and 100GbE shuttle data at essentially the same speed. While 100GbE is not currently cost-effective as a cluster interconnect, a range of products is available today from a number of vendors. Ironically, the lone InfiniBand provider, Mellanox, has the most advanced 100GbE portfolio, although Cisco, Juniper Networks, Chelsio, Broadcom, QLogic, Arista, and others have products in the field as well.
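The bandwidth parity comes down to simple lane arithmetic: both EDR InfiniBand and 100GbE are commonly built from four lanes running at roughly 25 Gb/s each. The short Python sketch below illustrates the math; the lane counts and per-lane rates reflect the typical four-lane link configurations and are stated here as assumptions rather than figures taken from the demo.

# Back-of-the-envelope comparison of EDR InfiniBand and 100GbE link rates.
# Lane counts and per-lane data rates are assumptions based on the common
# four-lane (x4 / -CR4 style) configurations, not figures from the article.
LINKS = {
    "EDR InfiniBand (4x)": {"lanes": 4, "gbps_per_lane": 25.0},
    "100GbE (four-lane variant)": {"lanes": 4, "gbps_per_lane": 25.0},
}

for name, link in LINKS.items():
    aggregate = link["lanes"] * link["gbps_per_lane"]
    print(f"{name}: {link['lanes']} lanes x {link['gbps_per_lane']:.0f} Gb/s "
          f"= {aggregate:.0f} Gb/s aggregate")

Both work out to roughly 100 Gb/s of aggregate bandwidth, which is why the two interconnects can be said to shuttle data at essentially the same speed; latency, of course, is a separate question.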

Today, the most common usage for 100GbE is as a long-haul network. One such example, which will be demonstrated at SC16, is a 100GbE link between the Ethernet Alliance and Caltech (California Institute of Technology). The idea is to show how the technology can provide a high-bandwidth network for geographically separated HPC sites that need to transfer massive datasets between them.

In the realm of cluster interconnects, Ethernet is making a transition from 10GbE to 25GbE. This technology is mainly aimed at hyperscale environments in the aforementioned application segments of cloud computing, web, and telecom. However, where HPC applications don’t require the top data rates and lowest latencies that InfiniBand affords, 25GbE can make sense as well.

For those interested, the Ethernet Alliance’s demo can be found in booth 1101 on the SC16 show floor. The exhibition will take place at the Calvin L. Rampton Salt Palace Convention Center in Salt Lake City and will run from November 14th through the 17th.