Highlights - June 2022

The 59th edition of the TOP500 revealed the Frontier system to be the first true exascale machine, with an HPL score of 1.102 Exaflop/s.

We have a new No. 1, the Frontier system at the Oak Ridge National Laboratory (ORNL), Tennessee, USA. Frontier brings the pole position back to the USA after it was held for two years by the Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. The Frontier system is currently being integrated and tested at ORNL. It has a peak performance of 1.6 ExaFlop/s and has so far achieved an HPL benchmark score of 1.102 Eflop/s. On the HPL-AI benchmark, which measures mixed-precision performance, Frontier already demonstrated 6.86 Exaflops!
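
Since "exascale" is simply a threshold on the measured HPL score, the arithmetic behind these claims can be made explicit. The following is a minimal sketch in plain Python (not TOP500 tooling) computing Frontier's HPL efficiency from the figures quoted here; the 1.686 Eflop/s peak is taken from the Rpeak column of the Top10 table below.

```python
# HPL efficiency is the measured Rmax divided by the theoretical Rpeak,
# and "exascale" means an Rmax of at least 1 Exaflop/s on HPL.

RMAX_EFLOPS = 1.102    # Frontier's measured HPL score (Exaflop/s)
RPEAK_EFLOPS = 1.686   # Frontier's theoretical peak (Exaflop/s, from the Top10 table)

efficiency = RMAX_EFLOPS / RPEAK_EFLOPS
print(f"HPL efficiency: {efficiency:.1%}")      # ~65.4% of peak sustained on HPL
print(f"True exascale:  {RMAX_EFLOPS >= 1.0}")  # True
```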

We also have a new No. 3, the LUMI system at EuroHPC/CSC in Finland, which is also the largest system in Europe. The third newcomer to the top 10 is at No. 10, the Adastra system at GENCI-CINES in France.

All three new systems in the top 10 are based on the latest HPE Cray EX235a architecture, which combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and Slingshot-11 interconnects.

Here is a summary of the systems in the Top 10:

  • Frontier is the new No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a peak performance exceeding one ExaFlop/s. It is currently being integrated and tested at ORNL in Tennessee, USA, where it will be operated by the Department of Energy (DOE). It achieved 1.102 Exaflop/s using 8,730,112 cores. The new HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.
  • Fugaku, now the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Pflop/s, nearly 3x ahead of the No. 3 system on the list.
  • The new LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is the new No. 3 with a performance of 151.9 Pflop/s, just ahead of No. 4. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC's data center in Kajaani, Finland.
  • Summit, an IBM-built system at ORNL in Tennessee, USA, is now listed at the No. 4 spot worldwide with a performance of 148.6 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
  • Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 5. Its architecture is very similar to that of the No. 4 system, Summit. It is built from 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
| Rank | Site | System | Manufacturer | Cores | Rmax (PFlop/s) | Rpeak (PFlop/s) | Power (kW) |
|------|------|--------|--------------|-------|----------------|-----------------|------------|
| 1 | DOE/SC/Oak Ridge National Laboratory, United States | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | HPE | 8,730,112 | 1,102.00 | 1,685.65 | 21,100 |
| 2 | RIKEN Center for Computational Science, Japan | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D | Fujitsu | 7,630,848 | 442.01 | 537.21 | 29,899 |
| 3 | EuroHPC/CSC, Finland | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | HPE | 1,110,144 | 151.90 | 214.35 | 2,942 |
| 4 | DOE/SC/Oak Ridge National Laboratory, United States | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | IBM | 2,414,592 | 148.60 | 200.79 | 10,096 |
| 5 | DOE/NNSA/LLNL, United States | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | IBM / NVIDIA / Mellanox | 1,572,480 | 94.64 | 125.71 | 7,438 |
| 6 | National Supercomputing Center in Wuxi, China | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway | NRCPC | 10,649,600 | 93.01 | 125.44 | 15,371 |
| 7 | DOE/SC/LBNL/NERSC, United States | Perlmutter - HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10 | HPE | 761,856 | 70.87 | 93.75 | 2,589 |
| 8 | NVIDIA Corporation, United States | Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband | Nvidia | 555,520 | 63.46 | 79.22 | 2,646 |
| 9 | National Super Computer Center in Guangzhou, China | Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000 | NUDT | 4,981,760 | 61.44 | 100.68 | 18,482 |
| 10 | Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES), France | Adastra - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | HPE | 319,072 | 46.10 | 61.61 | 921 |
  • Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 6 position with 93 Pflop/s.
  • Perlmutter, at No. 7, is based on the HPE Cray "Shasta" platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 70.9 Pflop/s.
  • Now at No. 8, Selene is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 GPUs for acceleration and a Mellanox HDR InfiniBand network, and achieved 63.4 Pflop/s.
  • Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 9 system with 61.4 Pflop/s.
  • The Adastra system installed at GENCI-CINES is new to the list at No. 10. It is the third new HPE Cray EX system and the second fastest system in Europe. It achieved 46.1 Pflop/s.
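
The Rmax and Power columns in the Top10 table above also determine each system's power efficiency, the metric the Green500 ranks by. The following is a minimal sketch in Python (not official Green500 tooling) of that conversion for a few of the systems listed; the figures are copied from the table.

```python
# Power efficiency in GFlops/W from the Top10 table above.
# 1 PFlop/s = 1e6 GFlop/s and 1 kW = 1e3 W, so:
#   GFlops/W = (Rmax [PFlop/s] * 1e6) / (Power [kW] * 1e3)

top10_sample = {
    # name: (Rmax in PFlop/s, power in kW)
    "Frontier": (1102.00, 21100),
    "Fugaku":   (442.01, 29899),
    "LUMI":     (151.90, 2942),
    "Adastra":  (46.10, 921),
}

for name, (rmax_pflops, power_kw) in top10_sample.items():
    gflops_per_watt = rmax_pflops * 1e6 / (power_kw * 1e3)
    print(f"{name:9s} {gflops_per_watt:6.2f} GFlops/W")

# The three HPE Cray EX235a systems land near 50-52 GFlops/W,
# roughly 3.5x Fugaku's ~14.8 GFlops/W.
```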

Highlights from the List

  • A total of 169 systems on the list use accelerator/co-processor technology, up from 151 six months ago. Of these, 84 use NVIDIA Volta chips and 54 use NVIDIA Ampere.

  • Intel continues to provide the processors for the largest share (77.6%) of TOP500 systems, down from 81.6% six months ago. AMD processors are used in 93 systems (18.6%), up from 14.6% six months ago.

  • With respect to HPCG performance, Supercomputer Fugaku maintains its lead, followed by Summit at No. 2 and LUMI at No. 3.

  • The entry level to the list moved up to the 1.65 Pflop/s mark on the Linpack benchmark.

  • The last system on the newest list was listed at position 465 in the previous TOP500.

  • The total combined performance of all 500 systems exceeded the Exaflop barrier, now at 4.40 Eflop/s, up from 3.04 Eflop/s six months ago.

  • The entry point for the TOP100 increased to 5.39 Pflop/s.

  • The average concurrency level in the TOP500 is 182,864 cores per system, up from 162,520 six months ago. (A sketch of how such list-wide aggregates are computed follows below.)
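
For reference, these aggregates follow directly from summing over the list. Below is a minimal sketch using a few illustrative entries rather than the full 500-system dataset; running the same computation over all 500 rows yields the 4.40 Eflop/s and 182,864 cores-per-system figures quoted above.

```python
# Recomputing list-wide aggregates from (cores, Rmax) pairs.
# The entries below are illustrative only (the top three systems).

systems = [
    (8_730_112, 1102.00),   # Frontier
    (7_630_848, 442.01),    # Fugaku
    (1_110_144, 151.90),    # LUMI
]

total_eflops = sum(rmax for _, rmax in systems) / 1000   # PFlop/s -> EFlop/s
avg_cores = sum(cores for cores, _ in systems) / len(systems)
print(f"Combined performance: {total_eflops:.2f} EFlop/s")    # 1.70 EFlop/s for these three
print(f"Average concurrency:  {avg_cores:,.0f} cores per system")
```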

General Trends

Installations by Countries/Regions:

HPC Manufacturers:

Interconnect Technologies:

Processor Technologies:

Green500

HPCG Results

About the TOP500 List

The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. Around that time, they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched, and much-debated twice-yearly event.