Even though there wasn’t much turnover in the latest TOP500 list, several new petascale supercomputers appeared that reflect interesting trends in the way HPC architectures are evolving. For the purposes of this discussion, we’ll focus on three of these new systems: Stampede2, TSUBAME 3.0, and MareNostrum 4.
For the first time in several years, AMD has brought a server chip to market that provides some real competition to Intel and its near-total domination of the datacenter market. The new AMD silicon, known as the EPYC 7000 series, comes with up to 32 cores, along with a number of features that offer some useful differentiation against its Xeon competition.
The fourth-generation MareNostrum supercomputer is up and running at the Barcelona Supercomputing Center (BSC), or at least the first phase of it is. When completed, it will contain the most interesting medley of processors of any supercomputer in existence. We asked Sergi Girona, Director of Operations at BSC, to describe the makeup of the new system and explain the rationale for building such a diverse machine.
The recent upgrade to the Piz Daint supercomputer at the Swiss National Supercomputing Centre (CSCS) has thrust the machine into the limelight here at the ISC High Performance conference on opening day. The Cray XC50 system turned in a Linpack result of 19.6 petaflops, which was good enough to capture the number three spot on the latest TOP500 list and displace Titan, the former GPU supercomputing champ, running at Oak Ridge National Laboratory.
In 1999, Peter Braam introduced Lustre, an open-source parallel file system which went on to become one of the most popular software packages for supercomputing. Braam’s success with Lustre was just the start of a career in which he founded five companies and guided technology development in numerous others -- Sun Microsystems, Red Hat, and Xyratex, among them. He’s currently working with Cambridge University on the Square Kilometre Array project and its Science Data Processor effort.
When assessing the fastest supercomputers in the world, system performance is king, while the I/O componentry that feeds these computational beasts often escapes notice. But a small group of storage devotees working on a project at the Virtual Institute for I/O (VI4IO) wants to change that.
IBM and its partners have developed a novel technology to build 5nm chips, based on silicon nanosheet transistors. Compared to 10nm chips using FinFET transistors, the new technology promises to deliver a 40 percent performance increase, a 75 percent power savings, or some combination of the two.
NVIDIA is hooking up with four of the world’s largest original design manufacturers (ODMs) to help accelerate adoption of its GPUs into hyperscale datacenters. The new partner program would give Foxconn, Inventec, Quanta and Wistron early access to the HGX reference architecture, NVIDIA’s server design for machine learning acceleration.
Building a quantum computer that can outperform conventional systems on certain types of algorithms looks to be tantalizingly close. As it stands today, Google and IBM appear to be the most likely candidates to claim that achievement.
The FY 2018 Congressional budget request for the Department of Energy has been released, reflecting a White House that favors supercomputing infrastructure over scientific research. That turns out to be both good news and bad news for the HPC community.