Intel Makes Presence Felt at Supercomputing Conference

Although Intel had no blockbuster reveals at this year’s supercomputing conference (SC16), it did issue a flurry of announcements to remind everyone that it still dominates much of the componentry in the HPC industry. And if anything, the company is looking to extend that hegemony, as well as move into new areas.


Source: Intel


One of the few areas it doesn’t dominate is the accelerator/manycore processor category, where NVIDIA maintains a significant market share advantage. In the latest TOP500 list, NVIDIA GPUs are used in slightly more than half of the systems that contain accelerators, while Intel claims one quarter of those systems with its Xeon Phi processors. NVIDIA also has large deployments in systems for hyperscale customers, the vast majority of which don't show up in the TOP500 list.

From Intel’s point of view, the latest TOP500 numbers actually represent a significant increase from last year, helped out by the fact that there are nine new systems on the list powered by the latest “Knights Landing” Xeon Phi processors. That includes two new top 10 systems, Cori and Oakforest-PACS, which claim the number five and six spots on the TOP500, respectively. The new Xeon Phi also powers Marconi, a six-petaflop system installed at CINECA in Italy. Likewise, Knights Landing chips are used in Theta, a five-petaflop warm-up machine for Aurora, one of the DOE’s 100-plus-petaflop systems to be deployed next year as part of the CORAL (Collaboration of Oak Ridge, Argonne and Lawrence Livermore) program. All of these installations were based on the standalone (self-hosted) version of Knights Landing; the coprocessor variant will be available next year.

Intel is also making headway on the interconnect front. Its new 100 Gbps Omni-Path (OPA) interconnect showed up in 28 of the TOP500 systems, including the aforementioned Oakforest-PACS supercomputer. For the time being, that gives Intel a significant lead over the other 100 Gbps offering, Mellanox’s EDR InfiniBand, which is currently deployed in just 12 systems. Beyond its success in the TOP500 list, Intel says OPA has been “shipping broadly” over the last nine months and has been deployed in more than 100 HPC clusters over that period.

On the reconfigurable computing front, Intel announced it had teamed with Chinese OEM Inspur on a server equipped with an Altera Arria 10 FPGA card. The 35-watt device manages to pump out 1.5 teraflops if one is inclined to use it for traditional scientific number crunching. Its real value, though, is for customers who want to employ it for less traditional HPC work like data compression, network processing, high-frequency trading, image recognition, speech recognition, and other deep learning use cases. It is especially interesting for those workloads in settings where power is a constraining factor. Inspur is likely eyeing the Chinese hyperscale firms as customers for this product.

Intel offered a sneak peek at the next-generation Skylake Xeon processors, which it is demonstrating at the conference. The new chips will incorporate the Advanced Vector Extensions-512 (AVX-512), which debuted in the Knights Landing Xeon Phi processors. The future Xeon chips will also get an integrated Omni-Path network adapter, another feature that will soon debut in the Xeon Phi. The only major Xeon Phi feature that won’t get copied into the Skylake server parts is the hybrid memory cube (HMC) technology, although maybe that will find its way into the Xeon product line at some point.

In any case, with the addition of AVX-512 and the integrated Omni-Path capability, Intel will have narrowed the difference between the two server lines considerably. Setting aside the HMC technology, the Xeon Phi is looking more like a manycore version of the multicore Xeon, although the core architectures are still quite different. Despite all that, it's plausible to think that these two product lines might be headed for convergence somewhere down the road.

The Skylake Xeons are scheduled for release in mid-2017, which should be roughly the same time as when the “Knights Mill” Xeon Phi product becomes available. Knights Mill will be equipped with half-precision math capabilities to make it more suitable for the kinds of machine learning work that NVIDIA has been so successful at capturing. Also coming in 2017 is Intel’s Deep Learning Inference Accelerator product, which is an integrated hardware/software solution aimed at convolutional neural networks. It’s powered by the same Arria 10 FPGA that is used in the Inspur solution.
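For readers unfamiliar with the trade-off: half precision (FP16) carries only a 10-bit mantissa, sacrificing accuracy and range for throughput and memory bandwidth, a bargain neural networks generally tolerate well. A minimal sketch of what gets lost, using NumPy’s float16 type purely as an illustration (the library and values are this article’s assumptions, not anything Intel has disclosed about Knights Mill):

```python
import numpy as np

# float16 has a 10-bit mantissa, so its machine epsilon is 2**-10;
# increments smaller than half the spacing between neighboring
# values simply vanish when added to 1.0
eps16 = np.finfo(np.float16).eps          # 2**-10, about 0.000977
tiny = np.float16(2 ** -11)               # below the half-ULP threshold at 1.0

print(np.float16(1.0) + eps16)            # the increment survives
print(np.float16(1.0) + tiny == np.float16(1.0))   # the increment is lost

# the same small term survives in single precision (23-bit mantissa)
print(np.float32(1.0) + np.float32(2 ** -11) == np.float32(1.0))
```

For deep learning workloads dominated by large matrix multiplies, this precision loss is usually acceptable, while halving the memory footprint and roughly doubling arithmetic throughput per cycle.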

Machine learning is an application category that Intel is currently playing catch-up in, but it appears to be quickly forming a strategy to get itself back in the game. Besides the hardware just mentioned, the company has collected a number of software components for this application segment, including the IP it acquired when it purchased Nervana Systems, the open source Trusted Analytics Platform (TAP), and Intel’s own Deep Learning Software Development Kit. On November 17th, the company is hosting an “Intel AI Day” in San Francisco, where we might get a better sense of how the chipmaker is planning to piece these assets together. Stay tuned.
