The Texas Advanced Computing Center (TACC) has announced “Frontera,” a Dell EMC cluster that will be the world’s most powerful academic supercomputer when it comes online in the summer of 2019.
The system, which we first reported on back in July, was made possible by a $60 million award from the National Science Foundation (NSF). It represents the first phase of a two-phase project to provide additional HPC capacity for university researchers. Frontera is expected to operate for five years.
“Supercomputers — like telescopes for astronomy or particle accelerators for physics — are essential research instruments that are needed to answer questions that can't be explored in the lab or in the field,” said Dan Stanzione, TACC executive director. “Our previous systems have enabled major discoveries, from the confirmation of gravitational wave detections by the Laser Interferometer Gravitational-wave Observatory to the development of artificial-intelligence-enabled tumor detection systems. Frontera will help science and engineering advance even further.”
The new system is expected to be about twice as powerful as TACC’s 18.3-petaflop Stampede2 machine, which is currently the fastest academic supercomputer in the world. At around 36 peak petaflops, Frontera would likely be the number five system in the world today, assuming it delivers a reasonable Linpack result. The last time an NSF-funded machine cracked the top 10 was in November 2015.
Frontera will be built by Dell EMC and will be equipped with Intel processors. Given the summer 2019 launch date, those processors could theoretically be the upcoming Cascade Lake Xeon CPUs. If that’s not in the cards, the system will most likely use one of the current Skylake Xeons. Stampede2 relied on Xeon Phi processors for much of its performance, but since Intel has shut down that product line, there will be no repeat performance in Frontera. According to TACC, NVIDIA will also have a role in the project, suggesting that at least some of the nodes will be equipped with Tesla GPUs for compute acceleration, Quadro GPUs for visualization, or perhaps both.
Frontera’s nodes will be hooked together with Mellanox networking gear, while Data Direct Networks will provide the storage system. CoolIT will supply the cooling system for the CPU nodes, and Green Revolution Cooling’s (GRC) immersion cooling system will be used for the GPU nodes.* (TACC has been a GRC customer since 2009.) Amazon, Google, and Microsoft will also have a hand in the project, presumably to provide access to Frontera via their respective clouds.
Getting first crack at the new system will be projects studying particle collisions at the Large Hadron Collider, global climate modeling, hurricane forecasting, and multi-messenger astronomy. During its operation, Frontera will also be used to support evaluation and testing for a potential phase 2 system, which, if funded and deployed, will be ten times faster than Frontera.
*[Editor's note: The original version of this article incorrectly suggested that the Frontera system would be entirely cooled with Green Revolution Cooling's system, rather than just its GPU nodes.]