June 2022

The 59th edition of the TOP500 revealed the Frontier system to be the first true exascale machine with an HPL score of 1.102 Exaflop/s.

The No. 1 spot is now held by the Frontier system at Oak Ridge National Laboratory (ORNL) in the US. Based on the latest HPE Cray EX235a architecture and equipped with AMD EPYC 64C 2GHz processors, the system has 8,730,112 total cores, a power efficiency rating of 52.23 gigaflops/watt, and relies on HPE's Slingshot-11 interconnect for data transfer.
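
For reference, the 52.23 gigaflops/watt figure follows directly from dividing Frontier's Rmax by its reported 21,100 kW power draw (both listed in the table at the end of this article). A minimal Python sketch of that unit arithmetic, purely illustrative and not an official TOP500 tool (the helper name is ours):

    # Power efficiency in GFlop/s per watt = Rmax / power draw.
    def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
        rmax_gflops = rmax_pflops * 1e6   # 1 PFlop/s = 1,000,000 GFlop/s
        power_watts = power_kw * 1e3      # 1 kW = 1,000 W
        return rmax_gflops / power_watts

    # Frontier: Rmax = 1,102.00 PFlop/s at 21,100 kW
    print(round(gflops_per_watt(1102.00, 21100), 2))  # prints 52.23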

Recent work on the Frontier system has allowed the machine to surpass the 1 exaflop barrier. With an exact HPL score of 1.102 Exaflop/s, Frontier is not only the most powerful supercomputer ever to appear on the list, it is also the first true exascale machine.

The top position was previously held for two years straight by the Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. With an unchanged HPL benchmark score of 442 PFlop/s, Fugaku has now dropped to No. 2. Given that Fugaku's theoretical peak exceeds the 1 exaflop barrier in lower-precision arithmetic, there is cause to call this system an exascale machine as well. However, Frontier is the only system able to demonstrate this on the HPL benchmark.

Another change within the TOP10 is the introduction of the LUMI system at EuroHPC/CSC in Finland. Now occupying the No. 3 spot, this new system has 1,110,144 cores and an HPL benchmark score of nearly 152 PFlop/s. LUMI is also noteworthy as the largest system in Europe.

Finally, another change within the TOP10 occurred at the No. 10 spot with the addition of the Adastra system at GENCI-CINES in France. It achieved an HPL benchmark score of 46.1 PFlop/s and is the second most powerful machine in Europe, behind LUMI.

Here is a summary of the systems in the TOP10:

  • Frontier is the new No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a peak performance exceeding one ExaFlop/s. It is currently being integrated and tested at ORNL in Tennessee, USA, where it will be operated by the Department of Energy (DOE). It achieved 1.102 Exaflop/s using 8,730,112 cores. The new HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.
  • Fugaku, now the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 PFlop/s, nearly three times the performance of the No. 3 system on the list.
  • The new LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is the new No. 3 with a performance of 151.9 PFlop/s, just ahead of No. 4. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC's data center in Kajaani, Finland.
  • Summit, an IBM-built system at ORNL in Tennessee, USA, is now listed at the No. 4 spot worldwide with a performance of 148.6 PFlop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two POWER9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
  • Sierra, a system at Lawrence Livermore National Laboratory in California, USA, is at No. 5. Its architecture is very similar to that of the No. 4 system, Summit. It is built with 4,320 nodes, each with two POWER9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 PFlop/s.
  • Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 6 position with 93 PFlop/s.
  • Perlmutter, at No. 7, is based on the HPE Cray "Shasta" platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 70.9 PFlop/s.
  • Now at No. 8, Selene is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 GPUs for acceleration and a Mellanox HDR InfiniBand network, and it achieved 63.5 PFlop/s.
  • Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 9 system with 61.4 PFlop/s.
  • The Adastra system installed at GENCI-CINES is new to the list at No. 10. It is the third new HPE Cray EX system in the TOP10 and the second fastest system in Europe. It achieved 46.1 PFlop/s.
Detailed figures for the TOP10:

1. Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE
   DOE/SC/Oak Ridge National Laboratory, United States
   Cores: 8,730,112 | Rmax: 1,102.00 PFlop/s | Rpeak: 1,685.65 PFlop/s | Power: 21,100 kW

2. Supercomputer Fugaku - Supercomputer Fugaku, A64FX 48C 2.2GHz, Tofu interconnect D, Fujitsu
   RIKEN Center for Computational Science, Japan
   Cores: 7,630,848 | Rmax: 442.01 PFlop/s | Rpeak: 537.21 PFlop/s | Power: 29,899 kW

3. LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE
   EuroHPC/CSC, Finland
   Cores: 1,110,144 | Rmax: 151.90 PFlop/s | Rpeak: 214.35 PFlop/s | Power: 2,942 kW

4. Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband, IBM
   DOE/SC/Oak Ridge National Laboratory, United States
   Cores: 2,414,592 | Rmax: 148.60 PFlop/s | Rpeak: 200.79 PFlop/s | Power: 10,096 kW

5. Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband, IBM / NVIDIA / Mellanox
   DOE/NNSA/LLNL, United States
   Cores: 1,572,480 | Rmax: 94.64 PFlop/s | Rpeak: 125.71 PFlop/s | Power: 7,438 kW

6. Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway, NRCPC
   National Supercomputing Center in Wuxi, China
   Cores: 10,649,600 | Rmax: 93.01 PFlop/s | Rpeak: 125.44 PFlop/s | Power: 15,371 kW

7. Perlmutter - HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10, HPE
   DOE/SC/LBNL/NERSC, United States
   Cores: 761,856 | Rmax: 70.87 PFlop/s | Rpeak: 93.75 PFlop/s | Power: 2,589 kW

8. Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband, Nvidia
   NVIDIA Corporation, United States
   Cores: 555,520 | Rmax: 63.46 PFlop/s | Rpeak: 79.22 PFlop/s | Power: 2,646 kW

9. Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000, NUDT
   National Super Computer Center in Guangzhou, China
   Cores: 4,981,760 | Rmax: 61.44 PFlop/s | Rpeak: 100.68 PFlop/s | Power: 18,482 kW

10. Adastra - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE
    Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES), France
    Cores: 319,072 | Rmax: 46.10 PFlop/s | Rpeak: 61.61 PFlop/s | Power: 921 kW
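
The same figures support two common derived metrics: HPL efficiency (the fraction of Rpeak actually achieved on HPL) and power efficiency in GFlop/s per watt. A short Python sketch using the numbers above, again purely as an illustration rather than an official TOP500 script:

    # (system, Rmax in PFlop/s, Rpeak in PFlop/s, power in kW) from the list above
    top10 = [
        ("Frontier",           1102.00, 1685.65, 21100),
        ("Fugaku",              442.01,  537.21, 29899),
        ("LUMI",                151.90,  214.35,  2942),
        ("Summit",              148.60,  200.79, 10096),
        ("Sierra",               94.64,  125.71,  7438),
        ("Sunway TaihuLight",    93.01,  125.44, 15371),
        ("Perlmutter",           70.87,   93.75,  2589),
        ("Selene",               63.46,   79.22,  2646),
        ("Tianhe-2A",            61.44,  100.68, 18482),
        ("Adastra",              46.10,   61.61,   921),
    ]

    for name, rmax, rpeak, power_kw in top10:
        hpl_eff = rmax / rpeak                      # share of theoretical peak reached on HPL
        gflops_w = (rmax * 1e6) / (power_kw * 1e3)  # GFlop/s per watt
        print(f"{name:18} {hpl_eff:6.1%} {gflops_w:7.2f} GFlops/W")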