Highlights - June 2025

This is the 65th edition of the TOP500.

Here is a summary of the systems in the Top 10:

  • The El Capitan system at the Lawrence Livermore National Laboratory, California, USA remains the No. 1 system on the TOP500. The HPE Cray EX255a system was measured at 1.742 Exaflop/s on the HPL benchmark. El Capitan has 11,039,616 cores and is based on AMD 4th generation EPYC™ processors with 24 cores at 1.8 GHz and AMD Instinct™ MI300A accelerators. It uses the HPE Slingshot interconnect for data transfer and achieves an energy efficiency of 58.9 Gigaflops/watt. The system also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes it the new leader on this ranking as well.

  • Frontier is the No. 2 system in the TOP500. This HPE Cray EX system was the first US system with a performance exceeding one Exaflop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It achieved 1.353 Exaflop/s using 9,066,176 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot interconnect.

  • Aurora is currently the No. 3 with a HPL score of 1.012 Exaflop/s. It is installed at the Argonne Leadership Computing Facility, Illinois, USA, where it is also operated for the Department of Energy (DOE). This new Intel system is based on HPE Cray EX - Intel Exascale Compute Blades. It uses Intel Xeon CPU Max Series processors, Intel Data Center GPU Max Series accelerators, and a Slingshot interconnect.

  • JUPITER Booster is the new No. 4 system. It is installed at EuroHPC/FZJ in Jülich, Germany, where it is operated by the Jülich Supercomputing Centre. It is based on Eviden’s BullSequana XH3000 direct liquid-cooled architecture, which utilizes Grace Hopper Superchips. It is currently being commissioned and achieved a preliminary HPL value of 793.4 Petaflop/s on a partial system.

  • Eagle, the No. 5 system, is installed by Microsoft in its Azure cloud. This Microsoft NDv5 system is based on Xeon Platinum 8480C processors and NVIDIA H100 accelerators and achieved an HPL score of 561 Petaflop/s.

Rank Site System Cores Rmax (PFlop/s) Rpeak (PFlop/s) Power (kW)
1 DOE/NNSA/LLNL
United States
El Capitan - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, TOSS
HPE
11,039,616 1,742.00 2,746.38 29,581
2 DOE/SC/Oak Ridge National Laboratory
United States
Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE Cray OS
HPE
9,066,176 1,353.00 2,055.72 24,607
3 DOE/SC/Argonne National Laboratory
United States
Aurora - HPE Cray EX - Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4GHz, Intel Data Center GPU Max, Slingshot-11
Intel
9,264,128 1,012.00 1,980.01 38,698
4 EuroHPC/FZJ
Germany
JUPITER Booster - BullSequana XH3000, GH Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200, RedHat Enterprise Linux
EVIDEN
4,801,344 793.40 930.00 13,088
5 Microsoft Azure
United States
Eagle - Microsoft NDv5, Xeon Platinum 8480C 48C 2GHz, NVIDIA H100, NVIDIA Infiniband NDR
Microsoft Azure
2,073,600 561.20 846.84
6 Eni S.p.A.
Italy
HPC6 - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, RHEL 8.9
HPE
3,143,520 477.90 606.97 8,461
7 RIKEN Center for Computational Science
Japan
Supercomputer Fugaku - Supercomputer Fugaku, A64FX 48C 2.2GHz, Tofu interconnect D
Fujitsu
7,630,848 442.01 537.21 29,899
8 Swiss National Supercomputing Centre (CSCS)
Switzerland
Alps - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11, HPE Cray OS
HPE
2,121,600 434.90 574.84 7,124
9 EuroHPC/CSC
Finland
LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11
HPE
2,752,704 379.70 531.51 7,107
10 EuroHPC/CINECA
Italy
Leonardo - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 64 GB, Quad-rail NVIDIA HDR100 Infiniband
EVIDEN
1,824,768 241.20 306.31 7,494
  • The No. 6 system is called HPC6 and is installed at the Eni S.p.A. center in Ferrera Erbognone, Italy. It is another HPE Cray EX235a system combining 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot interconnect. It achieved 477.9 Petaflop/s.

  • Fugaku, the No. 7 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores which allowed it to achieve an HPL benchmark score of 442 Petaflop/s. It is now the second fastest system on the HPCG benchmark with 16 Petaflop/s.

  • The Alps system installed at the Swiss National Supercomputing Centre (CSCS) in Switzerland is now at No. 8. It is an HPE Cray EX254n system with NVIDIA Grace 72C and NVIDIA GH200 Superchip and a Slingshot interconnect. It achieved 434.9 Petaflop/s.

  • The LUMI system, another HPE Cray EX system, installed at the EuroHPC center at CSC in Finland, is at No. 9 with a performance of 380 Petaflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC’s data center in Kajaani, Finland.

  • The No. 10 system Leonardo is installed at another EuroHPC site in CINECA, Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 64 GB as accelerators, and Quad-rail NVIDIA HDR100 Infiniband as interconnect. It achieved an HPL performance of 241.2 Petaflop/s.
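As a side note, the Rmax and Rpeak columns in the table above make it easy to compute each system's HPL efficiency, i.e. the fraction of theoretical peak actually achieved on the benchmark. A minimal Python sketch, using El Capitan's figures from the table:

```python
# HPL efficiency = Rmax / Rpeak, using El Capitan's values from the Top 10 table
rmax_pflops = 1742.00   # measured HPL performance (PFlop/s)
rpeak_pflops = 2746.38  # theoretical peak performance (PFlop/s)

efficiency = rmax_pflops / rpeak_pflops
print(f"El Capitan HPL efficiency: {efficiency:.1%}")  # ~63.4%
```

The same two columns give, for example, roughly 82% for Fugaku, which is typical of the higher HPL efficiencies seen on CPU-only machines.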

Highlights from the List

  • A total of 237 systems on the list use accelerator/co-processor technology, up from 210 six months ago. 82 of these use NVIDIA Hopper chips, 68 use NVIDIA Ampere, and 27 use NVIDIA Volta.

  • Intel continues to provide the processors for the largest share (58.8 %) of TOP500 systems, down from 61.8 % six months ago. AMD processors are used in 173 systems (34.6 %) on the current list, up from 32.4 % six months ago.

  • The entry level to the list moved up to the 2.44 Pflop/s mark on the Linpack benchmark.

  • The last system on the newest list was listed at position 456 in the previous TOP500.

  • The total combined performance of all 500 systems on the list is now 13.84 Exaflop/s (Eflop/s), up from 11.72 Eflop/s six months ago.

  • The entry point for the TOP100 increased to 16.59 Pflop/s.

  • The average concurrency level in the TOP500 is 275,414 cores per system, up from 257,970 six months ago.

General Trends

Installations by countries/regions:

HPC manufacturer:

Interconnect Technologies:

Processor Technologies:

Green500

In the Green500 the systems of the TOP500 are ranked by how much computational performance they deliver on the HPL benchmark per Watt of electrical power consumed. This electrical power efficiency is measured in Gigaflops/Watt. The ranking is driven not by the size of a system but by its technology, so its order looks therefore very different from the TOP500. The computational efficiency of a system tends to decrease slightly with system size, which, among technologically identical systems, gives smaller systems an advantage. Here are the top 10 of the Green500 ranking:
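Concretely, the efficiency metric is just the HPL result divided by the measured power draw. A quick Python sketch using El Capitan's figures quoted earlier (1,742 PFlop/s at 29,581 kW):

```python
# Green500-style energy efficiency: GFlops/Watt = Rmax (GFlop/s) / power (W)
rmax_pflops = 1742.0  # El Capitan's HPL result (PFlop/s)
power_kw = 29_581     # El Capitan's measured power (kW)

gflops = rmax_pflops * 1e6  # PFlop/s -> GFlop/s
watts = power_kw * 1e3      # kW -> W
efficiency = gflops / watts
print(f"{efficiency:.1f} GFlops/Watt")  # ~58.9, as quoted in the Top 10 summary
```

Note that Green500 entries may report power from a separately optimized run, so recomputing the ratio from TOP500 table values will not always reproduce a system's official Green500 number exactly.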

  • JEDI, the JUPITER Exascale Development Instrument, a system from EuroHPC/FZJ in Germany, once again claimed the No. 1 spot. JEDI repeated its energy efficiency rating from the last list at 72.73 GFlops/Watt while producing an HPL score of 4.5 PFlop/s. JEDI is a BullSequana XH3000 machine with NVIDIA GH200 Grace Hopper Superchips, Quad-Rail NVIDIA InfiniBand NDR200, and 19,584 total cores.

  • In second place is the ROMEO-2025 system at the ROMEO HPC Center - Champagne-Ardenne in France. With 47,328 total cores, it achieved an HPL result of 9.863 PFlop/s and an efficiency of 70.9 GFlops/Watt. The architecture of this system is identical to the No. 1 system JEDI, but as it is more than twice as large its energy efficiency is slightly lower.

  • The No. 3 spot was taken by the Adastra 2 system at the Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES) in France. This is an HPE Cray EX255a system with AMD 4th Gen EPYC 24-core 1.8GHz processors, AMD Instinct MI300A accelerators, and a Slingshot interconnect, running RHEL. With 16,128 total cores it achieved 2.529 PFlop/s HPL performance and an efficiency of 69.1 GFlops/Watt.

The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submissions of all data through a single webpage at http://top500.org/submit.

Rank TOP500 Rank System Cores Rmax (PFlop/s) Power (kW) Energy Efficiency (GFlops/watts)
1 259 JEDI - BullSequana XH3000, Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200 , ParTec/EVIDEN
EuroHPC/FZJ
Germany
19,584 4.50 67 72.733
2 148 ROMEO-2025 - BullSequana XH3000, Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200, Red Hat Enterprise Linux , EVIDEN
ROMEO HPC Center - Champagne-Ardenne
France
47,328 9.86 160 70.912
3 484 Adastra 2 - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, RHEL , HPE
Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES)
France
16,128 2.53 37 69.098
4 183 Isambard-AI phase 1 - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11 , HPE
University of Bristol
United Kingdom
34,272 7.42 117 68.835
5 255 Otus (GPU only) - ThinkSystem SD665-N V3, AMD EPYC 9655 96C 2.6GHz, NVIDIA H100 SXM5 80GB, Infiniband NDR, Rocky Linux 9.4 , Lenovo
Universitaet Paderborn - PC2
Germany
19,440 4.66 68.177
6 66 Capella - Lenovo ThinkSystem SD665-N V3, AMD EPYC 9334 32C 2.7GHz, Nvidia H100 SXM5 94Gb, Infiniband NDR200, AlmaLinux 9.4 , MEGWARE
TU Dresden, ZIH
Germany
85,248 24.06 445 68.053
7 304 SSC-24 Energy Module - HPE Cray XD670, Xeon Gold 6430 32C 2.1GHz, NVIDIA H100 SXM5 80GB, Infiniband NDR400, RHEL 9.2 , HPE
Samsung Electronics
South Korea
11,200 3.82 69 67.251
8 85 Helios GPU - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11 , HPE
Cyfronet
Poland
89,760 19.14 317 66.948
9 399 AMD Ouranos - BullSequana XH3000, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Infiniband NDR200, RedHat Enterprise Linux , EVIDEN
Atos
France
16,632 2.99 48 66.464
10 412 Henri - ThinkSystem SR670 V2, Intel Xeon Platinum 8362 32C 2.8GHz, NVIDIA H100 80GB PCIe, Infiniband HDR , Lenovo
Flatiron Institute
United States
8,288 2.88 44 65.396

HPCG Results

The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) Benchmark results.

  • El Capitan is the new leader on the HPCG benchmark with 17.41 HPCG-PFlop/s.

  • Supercomputer Fugaku, the long-time leader, is now in second position with 16 HPCG-PFlop/s.

  • The DOE system Frontier at ORNL remains in the third position with 14.05 HPCG-PFlop/s.

  • The Aurora system is now in fourth position with 5.6 HPCG-PFlop/s.

Rank TOP500 Rank System Cores Rmax (PFlop/s) HPCG (PFlop/s)
1 1 El Capitan - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, TOSS ,
DOE/NNSA/LLNL
United States
11,039,616 1,742.00 17.41
2 7 Supercomputer Fugaku - Supercomputer Fugaku, A64FX 48C 2.2GHz, Tofu interconnect D ,
RIKEN Center for Computational Science
Japan
7,630,848 442.01 16.00
3 2 Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE Cray OS ,
DOE/SC/Oak Ridge National Laboratory
United States
9,066,176 1,353.00 14.05
4 3 Aurora - HPE Cray EX - Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4GHz, Intel Data Center GPU Max, Slingshot-11 ,
DOE/SC/Argonne National Laboratory
United States
9,264,128 1,012.00 5.61
5 9 LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 ,
EuroHPC/CSC
Finland
2,752,704 379.70 4.59
6 8 Alps - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11, HPE Cray OS ,
Swiss National Supercomputing Centre (CSCS)
Switzerland
2,121,600 434.90 3.67
7 10 Leonardo - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 64 GB, Quad-rail NVIDIA HDR100 Infiniband ,
EuroHPC/CINECA
Italy
1,824,768 241.20 3.11
8 15 ABCI 3.0 - HPE Cray XD670, Xeon Platinum 8558 48C 2.1GHz, NVIDIA H200 SXM5 141 GB, Infiniband NDR200, Rocky Linux 9 ,
National Institute of Advanced Industrial Science and Technology (AIST)
Japan
479,232 145.10 2.45
9 25 Perlmutter - HPE Cray EX 235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-11 ,
DOE/SC/LBNL/NERSC
United States
888,832 79.23 1.91
10 20 Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband ,
DOE/NNSA/LLNL
United States
1,572,480 94.64 1.80

HPL-MxP Results

On the HPL-MxP benchmark, which measures performance for mixed-precision calculations, El Capitan now leads with 16.7 Exaflops, ahead of Aurora at 11.6 Exaflops and Frontier at 11.4 Exaflops. Aurora thus remains narrowly ahead of Frontier, as it was on the previous list.

The HPL-MxP benchmark seeks to highlight the use of mixed-precision computations. Traditional HPC uses 64-bit floating point computations. Today we see hardware with various levels of floating point precision: 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that much higher performance is possible by using mixed precision during the computation (see the Top 5 from the HPL-MxP benchmark), and that, using mathematical techniques, the mixed-precision result can reach the same accuracy as straight 64-bit precision.
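The mathematical technique behind this is classical iterative refinement: factor and solve the system in fast low precision, then repeatedly correct the solution using residuals computed in full 64-bit precision. A toy NumPy illustration of the idea (the matrix and sizes here are made up for the example; real HPL-MxP implementations are far more sophisticated):

```python
import numpy as np

# Build a well-conditioned random test system (illustrative only)
rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant
b = rng.standard_normal(n)

# Step 1: solve in float32 -- the fast, low-precision part
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Step 2: iterative refinement -- residuals in float64 recover full accuracy
for _ in range(5):
    r = b - A @ x  # residual computed in 64-bit precision
    correction = np.linalg.solve(A.astype(np.float32), r.astype(np.float32))
    x += correction.astype(np.float64)

rel_residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual after refinement: {rel_residual:.2e}")
```

Most of the arithmetic happens in the cheap low-precision solves, while the final answer is accurate to roughly float64 working precision.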

  • This year’s winner of the HPL-MxP category is the El Capitan system with 16.7 Exaflop/s.

  • Aurora is in second place with an 11.6 Exaflop/s score on the HPL-MxP benchmark.

  • Frontier remains in third place with a score of 11.4 Exaflop/s.

Rank Site Computer Cores HPL Rmax
(Eflop/s)
TOP500
Rank
HPL-MxP
(Eflop/s)
Speedup
1 DOE/SC/LLNL, USA El Capitan, HPE Cray 255a, AMD 4th Gen EPYC 24C 1.8 GHz, AMD Instinct MI300A, Slingshot-11 11,039,616 1.742 1 16.7 9.6
2 DOE/SC/ANL, USA Aurora, HPE Cray EX, Intel Max 9470 52C, 2.4 GHz, Intel GPU MAX, Slingshot-11 8,159,232 1.012 3 11.6 11.5
3 DOE/SC/ORNL, USA Frontier, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 8,560,640 1.353 2 11.4 8.4
4 AIST, Japan ABCI 3.0, HPE Cray XD670, Xeon Platinum 8558 48C 2.1GHz, NVIDIA H200 SXM5 141 GB, InfiniBand NDR200, HPE 479,232 0.145 15 2.36 16.3
5 EuroHPC/CSC, Finland LUMI, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 2,752,704 0.380 9 2.35 6.2
6 RIKEN Center for Computational Science, Japan Fugaku, Fujitsu A64FX 48C 2.2GHz, Tofu D 7,630,848 0.442 7 2.0 4.5
7 EuroHPC/CINECA, Italy Leonardo, BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 40 GB, Quad-rail NVIDIA HDR100 InfiniBand 1,824,768 0.241 10 1.8 7.6
8 CII, Institute of Science, Japan TSUBAME 4, HPE Cray XD665, AMD EPYC 9654 96C 2.4GHz, NVIDIA H100 SXM5 94 GB, Mellanox NDR200 172,800 0.025 46 0.64 25.0
9 NVIDIA, USA Selene, DGX SuperPOD, AMD EPYC 7742 64C 2.25 GHz, Mellanox HDR, NVIDIA A100 555,520 0.063 30 0.63 9.9
10 DOE/SC/LBNL/NERSC, USA Perlmutter, HPE Cray EX235n, AMD EPYC 7763 64C 2.45 GHz, Slingshot-10, NVIDIA A100 761,856 0.079 25 0.59 7.5
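The Speedup column in the table above is simply the ratio of the mixed-precision result to the standard 64-bit HPL result. A quick check in Python against the top three rows:

```python
# Speedup = HPL-MxP (Eflop/s) / HPL Rmax (Eflop/s), values from the table above
systems = {
    "El Capitan": (16.7, 1.742),
    "Aurora":     (11.6, 1.012),
    "Frontier":   (11.4, 1.353),
}
speedups = {name: round(mxp / rmax, 1) for name, (mxp, rmax) in systems.items()}
print(speedups)  # {'El Capitan': 9.6, 'Aurora': 11.5, 'Frontier': 8.4}
```

These reproduce the 9.6x, 11.5x, and 8.4x figures listed, showing how much headroom mixed precision offers over straight fp64 on these machines.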

About the TOP500 List

The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.