Since June 2019, only petaflop systems have been able to make the list. The total aggregate performance of all 500
systems has now risen to 1.65 exaflop/s.
Two IBM-built systems called Summit and Sierra, installed at DOE's Oak Ridge National Laboratory (ORNL) in
Tennessee and Lawrence Livermore National Laboratory in California, kept the first two positions in the TOP500.
The share of installations in China continues to rise strongly: 45.6% of all
systems are now listed as being installed in China. The share of systems listed in the USA remains near its all-time low
at 23.4%.
However, systems in the USA are on average larger, which allowed the USA
(37.1%) to stay close to China
(32.3%) in terms of installed performance.
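The gap between the two kinds of share follows directly from the figures above. A short sketch recovering the implied average system size per country from the article's own numbers (500 systems, 1.65 Eflop/s aggregate, and the quoted percentage shares):

```python
# Count share vs. performance share, using only figures stated in the text.
total_systems = 500
total_perf_pflops = 1650.0  # 1.65 Eflop/s aggregate, in Pflop/s

countries = {
    # country: (share of systems %, share of performance %)
    "China": (45.6, 32.3),
    "USA":   (23.4, 37.1),
}

for name, (sys_share, perf_share) in countries.items():
    n_systems = round(total_systems * sys_share / 100)
    perf = total_perf_pflops * perf_share / 100
    print(f"{name}: {n_systems} systems, {perf:.0f} Pflop/s total, "
          f"{perf / n_systems:.1f} Pflop/s per system on average")
```

The per-system averages that fall out (a USA system is more than twice as large as a Chinese one on average) are what lets the USA stay close in installed performance despite hosting far fewer systems.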
There were no changes to the top of the list at all; the first new system shows up only at position 24. It is an
IBM Power-based system utilizing NVIDIA Volta GV100 GPUs, which allowed it to capture the No. 3 spot on the Green500.
Highlights from the Top 10
Summit and Sierra kept the #1 and #2 spots for the USA
Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, remains at the
#1 spot with a performance of 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500
list. Summit has 4,356 nodes, each one housing two Power9 CPUs with 22 cores each and six NVIDIA Tesla V100
GPUs each with 80 streaming multiprocessors (SM). The nodes are linked together with a Mellanox dual-rail
EDR InfiniBand network.
Sierra, a system at the Lawrence Livermore National Laboratory in California, USA, stayed at #2. Its architecture
is very similar to that of the #1 system Summit. It is built with 4,320 nodes, each with two Power9 CPUs and four
NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer
Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in
China's Jiangsu province, led the list for the first two years of its life and is now at the No. 3
position with 93 Pflop/s.
Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT)
and deployed at the National Supercomputer Center in Guangzhou, China, remained the No. 4 system with 61.4
Pflop/s.
Frontera, a Dell C6420 system, was installed at the Texas Advanced Computing Center of the University of
Texas earlier this year and is listed at No. 5. It achieved 23.5 Pflop/s using 448,448 of its Intel Xeon
cores.
At No. 6 is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre
(CSCS) in Lugano, Switzerland, and the most powerful system in Europe.
Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories
and located at Los Alamos, improved its performance to 20.2 Pflop/s, which puts it at the No. 7
position.
The AI Bridging Cloud Infrastructure (ABCI) is installed in Japan at the National Institute of Advanced
Industrial Science and Technology (AIST) and is listed at No. 8 with a performance of 16.9 Pflop/s. The Fujitsu-built
system uses 20-core Xeon Gold processors along with NVIDIA Tesla V100 GPUs.
SuperMUC-NG is the next-generation high-end supercomputer at the Leibniz-Rechenzentrum (Leibniz
Supercomputing Centre) in Garching near Munich. With 311,040 cores and an HPL performance of 19.5
Pflop/s, it is listed at No. 9.
The system Lassen at No. 10 is an IBM Power system with NVIDIA Tesla V100 accelerators and a performance
of 18.3 Pflop/s.
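The rankings above are determined by the HPL (High-Performance Linpack) benchmark, which times the solution of a dense linear system Ax = b. The toy sketch below, using numpy on a single node, illustrates what HPL measures: dividing the known flop count of an LU-based solve, roughly (2/3)n³ + 2n², by the elapsed time. Real HPL is a tuned distributed-memory code; this is only a minimal single-process illustration, and the residual check is a simplified version of HPL's verification step.

```python
# Toy single-node sketch of the HPL measurement: time a dense solve of
# Ax = b and convert the known flop count into Gflop/s.
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)  # LU factorization plus triangular solves
elapsed = time.perf_counter() - t0

flops = (2 / 3) * n**3 + 2 * n**2  # standard flop count for an LU-based solve
print(f"{flops / elapsed / 1e9:.1f} Gflop/s on a {n}x{n} dense solve")

# Simplified residual check (HPL uses a similar scaled residual to validate runs)
residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
print(f"scaled residual: {residual:.2e}")
```

A system's Rmax on the list is exactly this kind of measured rate, taken at the problem size that maximizes it.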
Highlights from the List
A total of 145 systems on the list use accelerator/co-processor technology,
up from 134 six months ago. 94 of these use NVIDIA Volta
chips, 30 use NVIDIA Pascal, and 9 systems still use NVIDIA Kepler.
Intel continues to provide the processors for the largest share (94.2 percent) of TOP500 systems.
We have incorporated the HPCG benchmark results into the TOP500 list to provide a more balanced look at
system performance. The two top DOE systems, Summit and Sierra, also lead with respect to HPCG performance.
Japanese systems continue to take leading roles in the Green500. However, the top two DOE systems, Sierra and
Summit, also make the top 10 in the Green500 and demonstrate the progress in performance efficiency.
The entry level to the list moved up to the
1,142.0 Tflop/s mark on the Linpack benchmark.
The last system on the newest list was listed at position 399 in the previous list.
Total combined performance of all 500 systems exceeded the exaflop barrier at
1.65 exaflop/s (Eflop/s), up from
1.56 Eflop/s six months ago.
The entry point for the TOP100 increased as well.
The average concurrency level in the TOP500 is 126,308 cores
per system, up from 118,213 six months ago.
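The summary statistics quoted above are simple aggregates over the list itself: the aggregate performance is the sum of Rmax over all 500 entries, the entry level is the Rmax of the last-ranked system, and the average concurrency is the mean core count. A sketch with a made-up three-entry "list" (the system names and numbers below are illustrative, not real entries):

```python
# How the list-level statistics are derived. The entries here are
# hypothetical placeholders, not actual TOP500 rows.
toy_list = [
    # (name, rmax_tflops, cores)
    ("SystemA", 148600.0, 2414592),
    ("SystemB", 94640.0, 1572480),
    ("SystemC", 1142.0, 40000),
]

total_tflops = sum(rmax for _, rmax, _ in toy_list)
entry_level = min(rmax for _, rmax, _ in toy_list)  # slowest system still on the list
avg_cores = sum(c for _, _, c in toy_list) / len(toy_list)

print(f"aggregate performance: {total_tflops / 1e6:.2f} Eflop/s")
print(f"entry level: {entry_level:.1f} Tflop/s")
print(f"average concurrency: {avg_cores:,.0f} cores per system")
```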
Installations by country:
TOP 10 HPC manufacturers:
TOP 10 Interconnect Technologies:
TOP 10 Processor Technologies:
The data collection and curation of the Green500 project has been integrated with the TOP500 project. This
allows submissions of all data through a single webpage at http://top500.org/submit
The most energy-efficient system and No. 1 on the Green500 is a new Fujitsu A64FX prototype installed at
Fujitsu, Japan. It achieved 16.9 GFlops/Watt power-efficiency during its 2.0 Pflop/s Linpack performance
run. It is listed at position 160 in the TOP500.
In second position is the NA-1 system, a PEZY Computing / Exascaler Inc. system which is currently being
readied at PEZY Computing, Japan, for a future installation at NA Simulation in Japan. It achieved 16.3
GFlops/Watt power efficiency. It is at position 421 in the TOP500.
The No. 3 on the Green500 is AiMOS, a new IBM Power system at the Rensselaer Polytechnic Institute Center
for Computational Innovations (CCI), New York, USA. It achieved 15.8 GFlops/Watt and is listed at position
25 in the TOP500.
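The Green500 metric is simply HPL performance divided by the average power drawn during the run. Using the figures quoted above for the A64FX prototype, one can back out its implied power draw; this is my arithmetic from the stated numbers, not a figure from the list:

```python
# Green500 efficiency = HPL Rmax / average power during the run.
# Figures for the A64FX prototype as quoted above:
rmax_gflops = 2.0e6             # 2.0 Pflop/s expressed in Gflop/s
efficiency_gflops_per_watt = 16.9

power_watts = rmax_gflops / efficiency_gflops_per_watt
print(f"implied power draw: {power_watts / 1e3:.0f} kW")  # roughly 118 kW
```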
The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) benchmark results.
The two DOE systems Summit at ORNL and Sierra at LLNL grabbed the first 2 positions on the HPCG benchmark.
Summit achieved 2.93 HPCG-Pflop/s and Sierra 1.80 HPCG-Pflop/s.
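Where HPL solves a dense system and stresses raw floating-point throughput, HPCG runs a preconditioned conjugate-gradient solve on a sparse problem, which stresses memory bandwidth and communication; that is why HPCG-Pflop/s figures are far below HPL ones. A minimal unpreconditioned CG sketch on a small dense symmetric positive-definite matrix (real HPCG uses a sparse 27-point stencil and a multigrid preconditioner):

```python
# Minimal conjugate-gradient solver, the kernel family that HPCG is built
# around. This dense toy version only illustrates the iteration itself.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# CG requires a symmetric positive-definite matrix; build one deliberately.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)
b = rng.standard_normal(100)
x = conjugate_gradient(A, b)
print(f"residual norm: {np.linalg.norm(A @ x - b):.2e}")
```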
About the TOP500 List
The first version of what became today’s TOP500 list started as an exercise for a small conference in
Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how
things had changed. About that time they realized they might be onto something and decided to continue compiling
the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.