News

AMD Solidifies HPC Story with Latest Chips, Vendor Partnerships

Nov. 21, 2017

By: Michael Feldman

After largely ignoring the International Supercomputing Conference (ISC 2017) in Frankfurt this past June, AMD made good use of its time at SC17 in Denver last week to flesh out its high performance computing strategy and show off its latest EPYC CPUs and Radeon Instinct GPUs.

To regain a foothold in the HPC market, AMD has a formidable task ahead of it. The company’s two major competitors, Intel and NVIDIA, have established dominant positions in their respective roles as providers of chips for the high performance computing market – Intel with its Xeon processors and NVIDIA with its Tesla GPU accelerators. But with the new EPYC CPU and Radeon Instinct GPU, AMD may have a processor combo to win back some market share.

As we wrote back in July, a month after the new EPYC processor lineup was launched, AMD had a compelling story to tell around its newest CPUs. Back then, benchmarking performed by AnandTech showed that the EPYC chips outperformed Intel’s new Skylake-generation Xeon-SP (Scalable Processor) for floating-point-intensive codes like rendering and molecular dynamics. Considering the new Xeon chip sported AVX-512, a 512-bit vector processing capability that should have given it a natural advantage over the 128-bit vector implementation on EPYC, those results were rather unexpected. But as the AnandTech report noted, a lot of codes don’t make use of the more exotic vector extension set, instead relying on traditional floating point instructions. And in those cases, EPYC’s FPU silicon appears to have a clear advantage.
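
As a rough illustration of that point, the loop below is the kind of "traditional" floating point code AnandTech is describing: unless it is built with flags that target the wider vector units (and the compiler's auto-vectorizer cooperates), it runs on ordinary scalar or 128-bit SSE floating point instructions, which is exactly where EPYC's FPU throughput matters more than Xeon's AVX-512 width. The function, flags, and sizes here are illustrative assumptions, not code from the benchmarks in question.

```cpp
// daxpy-style kernel: y = a*x + y over double-precision arrays.
// Built with a typical "g++ -O2" (no -march flag), x86-64 compilers emit
// scalar SSE floating point instructions for this loop; only with something
// like "-O3 -march=skylake-avx512" (or explicit AVX-512 intrinsics) does it
// target the 512-bit vector units. Flags shown are illustrative assumptions.
#include <cstddef>
#include <vector>

void daxpy(double a, const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        y[i] = a * x[i] + y[i];   // plain FP multiply-add, no intrinsics
    }
}
```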

Subsequent analysis by Joel Hruska at ExtremeTech also noted that the use of AVX instructions caps the maximum turbo frequencies on the Xeon chips, with the more muscular AVX-512 incurring the largest penalty. As a result, Hruska speculated that AMD may have made the better architectural choice by optimizing FPU throughput rather than vector processing horsepower. His conclusion: “While higher efficiency should theoretically be able to still show significant AVX-512 performance improvements, they’re only going to happen with substantial performance tuning. Not all software vendors or buyers can afford that kind of work, but it’ll be critical for AVX-512 to be a success.”
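
To see why that frequency cap matters, consider the back-of-the-envelope arithmetic below. Peak double-precision throughput is roughly cores × clock × FLOPs per cycle; AVX-512 doubles the per-cycle width relative to 256-bit AVX2, but only code that actually fills the 512-bit units collects that doubling, while any core executing AVX-512 instructions pays the lower clock. The core count and frequencies used are hypothetical placeholders, not published Xeon-SP turbo figures.

```cpp
// Back-of-the-envelope peak-FLOPS comparison for one hypothetical 20-core CPU.
// All frequencies below are illustrative placeholders, not vendor-published
// Xeon-SP turbo tables.
#include <cstdio>

int main() {
    const int cores = 20;

    // Fully vectorized AVX-512 code on a part with two AVX-512 FMA units:
    // 32 DP FLOPs/cycle/core (2 FMAs x 8 doubles x 2 ops), at a reduced
    // "AVX-512 turbo" clock.
    const double avx512_ghz  = 2.0;                        // hypothetical
    const double avx512_peak = cores * avx512_ghz * 32;    // GFLOPS

    // The same cores running 256-bit AVX2 code: 16 DP FLOPs/cycle/core at a
    // higher clock.
    const double avx2_ghz  = 2.6;                          // hypothetical
    const double avx2_peak = cores * avx2_ghz * 16;        // GFLOPS

    // Poorly vectorized code mixed with AVX-512 instructions gets the clock
    // penalty without the width: scalar FP tops out around 4 FLOPs/cycle/core.
    const double scalar_peak = cores * avx512_ghz * 4;     // GFLOPS

    std::printf("AVX-512 peak: %6.0f GFLOPS\n", avx512_peak);
    std::printf("AVX2 peak:    %6.0f GFLOPS\n", avx2_peak);
    std::printf("Scalar mix:   %6.0f GFLOPS\n", scalar_peak);
}
```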

Since EPYC was launched in June, AMD has been gathering OEMs, ODMs, and system integrators to build HPC systems with the new chip. The ones highlighted at SC17 are HPE, Supermicro, Penguin Computing, Tyan, ASUS, Gigabyte Technology, BOXX, and EchoStreams. Dell EMC had previously announced it would be adding EPYC servers to its flagship PowerEdge lineup.

And now this week, AMD announced that one of the HPE offerings, the DL385 Gen10 server, set world records for two-socket servers on two standard floating point benchmarks: SPECrate2017_fp_base and SPECfp_rate2006. The 64-core server scored 257 on SPECrate2017_fp_base and 1980 on SPECfp_rate2006, both of which top any other currently published two-socket result on those benchmarks, including those posted for Xeon-SP-powered servers.

At SC17, AMD also talked up its Project 47 (P47) HPC platform that was designed in collaboration with ODM Inventec. This system employs both EPYC CPUs and Radeon Instinct GPUs to deliver a petaflop in a rack using a 1:4 CPU:GPU ratio. The P47 was demonstrated last summer, but last week was the first time the chipmaker was able to talk about it in front of an international HPC audience. AMD is taking orders for the systems now and they are expected to be ready for delivery in Q1 2018.
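
For context on how the petaflop figure works out: AMD’s public Project 47 description pairs 20 single-socket EPYC nodes with four Radeon Instinct MI25 accelerators each, and the MI25’s peak single-precision rate is roughly 12.3 teraflops, so the rack’s headline number is a single-precision figure. The sketch below just does that multiplication; the node count and per-GPU rate are taken from AMD’s Project 47 materials rather than from anything stated in this article, and should be treated as approximate.

```cpp
// Rough arithmetic behind "a petaflop in a rack" for the P47 platform.
// Node count and per-GPU rate are approximations drawn from AMD's public
// Project 47 description, not a spec sheet.
#include <cstdio>

int main() {
    const int nodes = 20;                  // 1 EPYC CPU per node
    const int gpus_per_node = 4;           // the 1:4 CPU:GPU ratio
    const double tflops_fp32_per_gpu = 12.3;   // ~peak FP32 per Radeon Instinct MI25

    const double rack_tflops = nodes * gpus_per_node * tflops_fp32_per_gpu;
    std::printf("Rack peak: %.0f TFLOPS FP32 (~1 petaflop)\n", rack_tflops);
}
```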

The renewed focus on HPC comes at an opportune moment. As we reported last week, Intel has abandoned its upcoming Xeon Phi “Knights Hill” offering under a cloud of uncertainty about the entire product line. Without a manycore x86 product on Intel’s roadmap, AMD’s EPYC-Radeon Instinct GPU combo looks a lot more attractive.

But the challenge on the Radeon side is, if anything, more formidable than that faced by the EPYC processors alone. That’s because the EPYC chips can ride atop the extensive x86 software ecosystem, while the Radeon GPUs rely almost entirely on software support from AMD. NVIDIA has been masterful at building its own CUDA software ecosystem around its GPU computing products, and has established a critical mass of application developers and third-party system support that makes its chips accessible to nearly anyone with an accelerator-friendly application.

AMD, meanwhile, is still playing catch-up here, and is trying to build momentum around its own ROCm (Radeon Open Compute) system software and libraries. Its primary advantage is that it can provide both the host processor and the accelerator hardware, so it is able to fine-tune its software to optimize that interaction. Unfortunately, NVIDIA has a big lead on the software front, which will make this an uphill slog for AMD.
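
For readers unfamiliar with what programming against the ROCm stack looks like in practice, the fragment below is a minimal kernel written in HIP, ROCm’s CUDA-like C++ dialect, of the sort compiled with ROCm’s hipcc for Radeon Instinct hardware. It assumes a working ROCm/HIP installation and is a generic sketch, not code from AMD’s libraries.

```cpp
// Minimal HIP vector add, compiled with ROCm's hipcc (e.g. "hipcc vadd.cpp").
// A generic sketch assuming a standard ROCm/HIP install, not AMD library code.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(vadd, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", hc[0]);   // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```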

Nevertheless, AMD appears to be rededicated to the high performance computing space, at least for the time being, and with users eagerly looking for alternatives to Intel and NVIDIA, the chipmaker may find more than a few customers willing to switch camps. AMD’s ability to undercut Intel’s pricing on the server CPU side should certainly attract HPC buyers who are driven more by price-performance than absolute performance. And if the company can duplicate that advantage on the GPU side, AMD may indeed have found a winning formula that can get it back in the HPC game.