Highlights - June 2022
The 59th edition of the TOP500 revealed the Frontier system to be the first true exascale machine, with an HPL score of 1.102 Exaflop/s.
We have a new No. 1, the Frontier system at Oak Ridge National Laboratory (ORNL), Tennessee, USA. Frontier brings the pole position back to the USA after it was held for two years by the Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. The Frontier system is currently being integrated and tested at ORNL. It has a peak performance of 1.6 Exaflop/s and has so far achieved an HPL benchmark score of 1.102 Exaflop/s. On the HPL-AI benchmark, which measures performance for mixed-precision calculations, Frontier has already demonstrated 6.86 Exaflop/s!
We also have a new No. 3, the LUMI system at EuroHPC/CSC in Finland, and the largest system in Europe. The third newcomer to the top 10 is at No. 10, the Adastra system at GENCI-CINES in France.
All three new systems in the top 10 are based on the latest HPE Cray EX235a architecture, which combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot interconnect.
Here is a summary of the systems in the Top 10:
- Frontier is the new No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a peak performance exceeding one Exaflop/s. It is currently being integrated and tested at ORNL in Tennessee, USA, where it will be operated by the Department of Energy (DOE). It has achieved 1.102 Exaflop/s using 8,730,112 cores. The new HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.
- Fugaku, now the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. Its 7,630,848 cores allowed it to achieve an HPL benchmark score of 442 Pflop/s, putting it nearly 3x ahead of the No. 3 system on the list.
- The new LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is the new No. 3 with a performance of 151.9 Pflop/s, just ahead of No. 4. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range exascale supercomputers for processing big data. LUMI, one of the pan-European pre-exascale supercomputers, is located in CSC's data center in Kajaani, Finland.
- Summit, an IBM-built system at ORNL in Tennessee, USA, is now listed at the No. 4 spot worldwide with a performance of 148.6 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
- Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 5. Its architecture is very similar to that of the No. 4 system, Summit. It is built with 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
| Rank | Site | System | Cores | Rmax (PFlop/s) | Rpeak (PFlop/s) | Power (kW) |
|------|------|--------|-------|----------------|-----------------|------------|
| 1 | DOE/SC/Oak Ridge National Laboratory, United States | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | 8,730,112 | 1,102.00 | 1,685.65 | 21,100 |
| 2 | RIKEN Center for Computational Science, Japan | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D (Fujitsu) | 7,630,848 | 442.01 | 537.21 | 29,899 |
| 3 | EuroHPC/CSC, Finland | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | 1,110,144 | 151.90 | 214.35 | 2,942 |
| 4 | DOE/SC/Oak Ridge National Laboratory, United States | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband (IBM) | 2,414,592 | 148.60 | 200.79 | 10,096 |
| 5 | DOE/NNSA/LLNL, United States | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband (IBM / NVIDIA / Mellanox) | 1,572,480 | 94.64 | 125.71 | 7,438 |
| 6 | National Supercomputing Center in Wuxi, China | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway (NRCPC) | 10,649,600 | 93.01 | 125.44 | 15,371 |
| 7 | DOE/SC/LBNL/NERSC, United States | Perlmutter - HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10 (HPE) | 761,856 | 70.87 | 93.75 | 2,589 |
| 8 | NVIDIA Corporation, United States | Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband (Nvidia) | 555,520 | 63.46 | 79.22 | 2,646 |
| 9 | National Super Computer Center in Guangzhou, China | Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000 (NUDT) | 4,981,760 | 61.44 | 100.68 | 18,482 |
| 10 | Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES), France | Adastra - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | 319,072 | 46.10 | 61.61 | 921 |
- Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 6 position with 93 Pflop/s.
- Perlmutter, at No. 7, is based on the HPE Cray "Shasta" platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 70.87 Pflop/s.
- Now at No. 8, Selene is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 accelerators and a Mellanox HDR InfiniBand network, and achieved 63.46 Pflop/s.
- Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 9 system with 61.4 Pflop/s.
- The Adastra system installed at GENCI-CINES is new to the list at No. 10. It is the third new HPE Cray EX system and the second fastest system in Europe. It achieved 46.1 Pflop/s.
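The Rmax and Rpeak columns above make it easy to see how much of each machine's theoretical peak is actually realized on HPL. A minimal sketch, using the numbers from the table above (system names and values as listed; this is an illustration, not part of the official list tooling):

```python
# HPL efficiency (Rmax / Rpeak) for selected Top 10 systems.
# Values are in PFlop/s, taken from the table above.

systems = {
    "Frontier":   (1102.00, 1685.65),
    "Fugaku":     (442.01, 537.21),
    "LUMI":       (151.90, 214.35),
    "Perlmutter": (70.87, 93.75),
}

def hpl_efficiency(rmax: float, rpeak: float) -> float:
    """Fraction of theoretical peak achieved on the HPL benchmark."""
    return rmax / rpeak

for name, (rmax, rpeak) in systems.items():
    print(f"{name}: {hpl_efficiency(rmax, rpeak):.1%}")
```

Frontier realizes roughly 65% of its 1.6+ Exaflop/s peak on HPL, while the CPU-based Fugaku reaches about 82%, a reminder that Rmax/Rpeak ratios vary considerably across architectures.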
Highlights from the List
- A total of 169 systems on the list use accelerator/co-processor technology, up from 151 six months ago. 54 of these use NVIDIA Ampere chips and 84 use NVIDIA Volta.
- Intel continues to provide the processors for the largest share (77.60 percent) of TOP500 systems, down from 81.60 percent six months ago. 93 systems (18.60 percent) on the current list use AMD processors, up from 14.60 percent six months ago.
- With respect to HPCG performance, Supercomputer Fugaku maintains the lead, followed by Summit at No. 2 and LUMI at No. 3.
- The entry level to the list moved up to the 1.65 Pflop/s mark on the Linpack benchmark.
- The last system on the newest list was listed at position 465 in the previous TOP500.
- The total combined performance of all 500 systems exceeded the exaflop barrier, now at 4.40 Eflop/s, up from 3.04 Eflop/s six months ago.
- The entry point for the TOP100 increased to 5.39 Pflop/s.
- The average concurrency level in the TOP500 is 182,864 cores per system, up from 162,520 six months ago.
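The aggregate figures above (total performance, average concurrency, entry level) are straightforward reductions over the per-system records. A minimal sketch of how such statistics are derived; the three records here are illustrative stand-ins, not actual list entries:

```python
# Deriving list-wide aggregates from per-system records.
# The records below are hypothetical, for illustration only.

records = [
    {"name": "SystemA", "rmax_pflops": 10.00, "cores": 200_000},
    {"name": "SystemB", "rmax_pflops": 5.00,  "cores": 150_000},
    {"name": "SystemC", "rmax_pflops": 1.65,  "cores": 100_000},
]

# Total combined performance of all systems on the list
total_pflops = sum(r["rmax_pflops"] for r in records)

# Average concurrency level (cores per system)
avg_cores = sum(r["cores"] for r in records) / len(records)

# Entry level: the slowest system still on the list
entry_level = min(r["rmax_pflops"] for r in records)

print(total_pflops, avg_cores, entry_level)
```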
General Trends
The list release also breaks down installations by countries/regions, HPC manufacturer, interconnect technologies, and processor technologies (charts not reproduced here).
Green500
- The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submission of all data through a single webpage at http://top500.org/submit
| Rank | TOP500 Rank | System | Site | Cores | Rmax (PFlop/s) | Power (kW) | Energy Efficiency (GFlops/watt) |
|------|-------------|--------|------|-------|----------------|------------|---------------------------------|
| 1 | 29 | Frontier TDS - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | DOE/SC/Oak Ridge National Laboratory, United States | 120,832 | 19.20 | 309 | 62.684 |
| 2 | 1 | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | DOE/SC/Oak Ridge National Laboratory, United States | 8,730,112 | 1,102.00 | 21,100 | 52.227 |
| 3 | 3 | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | EuroHPC/CSC, Finland | 1,110,144 | 151.90 | 2,942 | 51.629 |
| 4 | 10 | Adastra - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES), France | 319,072 | 46.10 | 921 | 50.028 |
| 5 | 146 | ATOS THX.A.B - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 64GB, Infiniband HDR (EVIDEN) | Atos, France | 25,056 | 3.50 | 86 | 41.411 |
| 6 | 326 | MN-3 - MN-Core Server, Xeon Platinum 8260M 24C 2.4GHz, Preferred Networks MN-Core, MN-Core DirectConnect (Preferred Networks) | Preferred Networks, Japan | 1,664 | 2.18 | 53 | 40.901 |
| 7 | 315 | SSC-21 Scalable Module - Apollo 6500 Gen10 plus, AMD EPYC 7543 32C 2.8GHz, NVIDIA A100 80GB, Infiniband HDR200 (HPE) | Samsung Electronics, South Korea | 16,704 | 2.27 | 103 | 33.983 |
| 8 | 319 | Tethys - NVIDIA DGX A100 Liquid Cooled Prototype, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100 80GB, Infiniband HDR (Nvidia) | NVIDIA Corporation, United States | 19,840 | 2.25 | 72 | 31.538 |
| 9 | 304 | Wilkes-3 - PowerEdge XE8545, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 80GB, Infiniband HDR200 dual rail (DELL EMC) | University of Cambridge, United Kingdom | 26,880 | 2.29 | 74 | 30.797 |
| 10 | 105 | Athena - FormatServer THOR ERG21, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100 SXM4 40 GB, Infiniband HDR (Format sp. z o.o.) | Cyfronet, Poland | 47,616 | 5.05 | 147 | 29.926 |
The system claiming the No. 1 spot on the GREEN500 is the Frontier Test & Development System (TDS) at ORNL in the US. With 120,832 total cores and an HPL benchmark score of 19.2 Pflop/s, the Frontier TDS machine is essentially a single rack of the actual Frontier system.
In the second spot is the full Frontier system at ORNL in the US, which earned the highest spot on the TOP500 list. This system produced a whopping 1.102 Exaflop/s HPL benchmark score while maintaining an energy efficiency of 52.23 gigaflops/watt. That this machine stayed competitive on the GREEN500 while becoming the first exascale system shows how energy efficiency is becoming a top priority for HPC facilities.
The No. 3 spot was taken by the LUMI system, which is quite an accomplishment for the newcomer. Despite being the largest system in Europe, LUMI has an impressive energy efficiency rating of 51.63 gigaflops/watt.
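The efficiency column above follows directly from Rmax and power draw. A minimal sketch of the unit conversion, using Frontier's listed values; note that the Green500 uses measured power runs, so recomputing from the table's rounded Rmax and power can differ slightly from the listed efficiencies for some systems:

```python
# Green500 energy efficiency in GFlops/watt, computed from the table above.
# Rmax is in PFlop/s and power in kW, so:
#   GFlops/W = (Rmax * 1e6 GFlop/s) / (Power * 1e3 W) = Rmax * 1000 / Power

def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
    return rmax_pflops * 1000 / power_kw

# Frontier: 1,102.00 PFlop/s at 21,100 kW
print(round(gflops_per_watt(1102.00, 21_100), 3))  # 52.227, matching the list
```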
HPCG Results
- The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) benchmark results.
| Rank | TOP500 Rank | System | Site | Cores | Rmax (PFlop/s) | HPCG (TFlop/s) |
|------|-------------|--------|------|-------|----------------|----------------|
| 1 | 2 | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D | RIKEN Center for Computational Science, Japan | 7,630,848 | 442.01 | 16,004.50 |
| 2 | 4 | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | DOE/SC/Oak Ridge National Laboratory, United States | 2,414,592 | 148.60 | 2,925.75 |
| 3 | 3 | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | EuroHPC/CSC, Finland | 1,110,144 | 151.90 | 1,935.73 |
| 4 | 7 | Perlmutter - HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10 | DOE/SC/LBNL/NERSC, United States | 761,856 | 70.87 | 1,905.44 |
| 5 | 5 | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | DOE/NNSA/LLNL, United States | 1,572,480 | 94.64 | 1,795.67 |
| 6 | 8 | Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband | NVIDIA Corporation, United States | 555,520 | 63.46 | 1,622.51 |
| 7 | 11 | JUWELS Booster Module - Bull Sequana XH2000, AMD EPYC 7402 24C 2.8GHz, NVIDIA A100, Mellanox HDR InfiniBand/ParTec ParaStation ClusterSuite | Forschungszentrum Juelich (FZJ), Germany | 449,280 | 44.12 | 1,275.36 |
| 8 | 18 | Dammam-7 - Cray CS-Storm, Xeon Gold 6248 20C 2.5GHz, NVIDIA Tesla V100 SXM2, InfiniBand HDR 100 | Saudi Aramco, Saudi Arabia | 672,520 | 22.40 | 881.40 |
| 9 | 12 | HPC5 - PowerEdge C4140, Xeon Gold 6252 24C 2.1GHz, NVIDIA Tesla V100, Mellanox HDR Infiniband | Eni S.p.A., Italy | 669,760 | 35.45 | 860.32 |
| 10 | 20 | Wisteria/BDEC-01 (Odyssey) - PRIMEHPC FX1000, A64FX 48C 2.2GHz, Tofu interconnect D | Information Technology Center, The University of Tokyo, Japan | 368,640 | 22.12 | 817.58 |
Supercomputer Fugaku remains the leader on the HPCG benchmark with 16 PFlop/s.
The DOE system Summit at ORNL remains in second position with 2.93 HPCG-Pflop/s.
The third position was captured by the new LUMI system with 1.94 HPCG-Pflop/s.
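HPCG is memory-bandwidth-bound, so systems achieve only a small fraction of their HPL score on it, and that fraction differs sharply by architecture. A minimal sketch using the values from the table above (the unit conversion is the only arithmetic involved):

```python
# HPCG as a fraction of HPL Rmax, from the table above.
# HPCG values are in TFlop/s, Rmax in PFlop/s (1 PFlop/s = 1000 TFlop/s).

results = {
    "Fugaku": (16004.50, 442.01),
    "Summit": (2925.75, 148.60),
    "LUMI":   (1935.73, 151.90),
}

for name, (hpcg_tflops, rmax_pflops) in results.items():
    frac = hpcg_tflops / (rmax_pflops * 1000)
    print(f"{name}: HPCG is {frac:.1%} of HPL")
```

Fugaku retains roughly 3.6% of its HPL score on HPCG versus about 2.0% for Summit and 1.3% for LUMI, which is why the A64FX-based machine, with its high-bandwidth HBM2 memory, leads this ranking despite being No. 2 on HPL.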
About the TOP500 List
The first version of what became today’s TOP500 list started as an exercise for a small conference in
Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how
things had changed. About that time they realized they might be onto something and decided to continue compiling
the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.