Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, has taken the #1 spot with a performance of 122.3 petaflop/s (Pflop/s). Sierra, a very similar system at the Lawrence Livermore National Laboratory, CA, USA, has taken #3. These two systems also took the first two spots on the HPCG benchmark.
Due to Summit and Sierra, the USA took back the lead as the top consumer of HPC performance, with 38.2% of the overall installed performance, while China is second with 29.1%.
For the first time ever, the leading HPC manufacturer is not a US company. Lenovo took the lead with 23.8 percent of systems installed, followed by HPE with 15.8 percent, Inspur with 13.6 percent, Cray with 11.2 percent, and Sugon with 11 percent.
Highlights from the Top 10
After a period of little change at the top, four of the first five systems on the new TOP500 are either new or substantially upgraded compared to last November.
After being absent from the top three for a year, the USA claimed the #1 and #3 spots with two new systems. The top systems installed in China are now at #2 and #4, the top system in Japan is at #5, and the first system in Europe is now at #6.
Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, has taken the #1 spot with a performance of 122.3 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs, each GPU with 80 streaming multiprocessors (SMs). The nodes are connected by a Mellanox dual-rail EDR InfiniBand network.
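As a quick sanity check, the system-wide totals implied by these per-node figures can be tallied; this is just illustrative arithmetic on the numbers quoted above, not an official TOP500 breakdown.

```python
# Totals implied by the Summit per-node figures quoted above (illustrative only).
nodes = 4356
cpu_cores = nodes * 2 * 22   # two 22-core Power9 CPUs per node
gpus = nodes * 6             # six Tesla V100 GPUs per node
sms = gpus * 80              # 80 streaming multiprocessors per V100

print(cpu_cores, gpus, sms)  # 191664 26136 2090880
```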
Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, led the list for the last two years but was pushed to the #2 position with 93 Pflop/s. It is the only system in the top five that is neither new nor upgraded since the last list.
Sierra, a new system at the Lawrence Livermore National Laboratory, CA, USA, is listed at #3. Its architecture is very similar to that of the new #1 system, Summit. It is built from 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 71.6 Pflop/s.
Tianhe-2A (Milky Way-2A), a system developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, was upgraded by replacing the Xeon Phi accelerators with the new proprietary Matrix-2000 chips. It is now the No. 4 system with 61.4 Pflop/s.
The new AI Bridging Cloud Infrastructure (ABCI), installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST), is listed at #5 with a performance of 19.88 Pflop/s. The Fujitsu-built system uses 20-core Xeon Gold processors along with NVIDIA Tesla V100 GPUs.
No. 6 is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, and the most powerful system in Europe. The system has a total of 361,760 cores.
Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory and previously the largest system in the USA, is now the No. 7 system. It achieved 17.59 Pflop/s using 261,632 of its NVIDIA K20x accelerator cores.
Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory, is the No. 8 system. It was first delivered in 2011 and has achieved 17.17 Pflop/s using 1,572,864 cores.
Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories and located at Los Alamos, has 940,800 cores and achieved 14.1 Pflop/s, which puts it at the No. 9 position.
Cori, a Cray XC40 supercomputer composed of 1,630 Intel Xeon "Haswell" processor nodes and 9,300 Intel Xeon Phi 7250 ("Knights Landing") nodes, entered the TOP500 in November 2016 and is now at No. 10 with 14.01 Pflop/s using 622,336 cores.
Highlights from the Overall List
The number of systems in the USA continues to fall, reaching a new record low of 124, down from 145 six months ago.
The number of systems installed in China remains at record levels, now 206, compared to 202 on the last list. China retains a substantially larger number of installations than the USA.
The USA did manage to take the lead back from China in the performance category. Systems installed in the USA now contribute 38.2% of the overall installed performance, while China is second with 29.1%. These numbers are a reversal compared to six months ago.
There are 273 systems on the list, more than half, with performance greater than one Pflop/s, up from 181 six months ago.
In the Top 10, the No.1 system, Summit, the No. 3 Sierra, and the No. 5 ABCI use NVIDIA Volta GPUs to achieve their performance. The No. 6 system Piz Daint and the No. 7 system Titan are using other NVIDIA GPUs to accelerate computation.
Accelerators are used in 110 TOP500 systems, a slight increase from the 101 accelerated systems on the November 2017 list. NVIDIA GPUs are present in 98 of these systems, including five of the top 10: Summit, Sierra, ABCI, Piz Daint, and Titan. Seven systems are equipped with Xeon Phi coprocessors; PEZY accelerators are used in four systems; and the Matrix-2000 coprocessor is used in a single machine, the upgraded Tianhe-2A. An additional 20 systems use Xeon Phi as the main processing unit.
The average number of accelerator cores for these 110 systems is 145,200 cores/system.
Intel continues to provide the processors for the largest share (95.2 percent) of TOP500 systems.
97.8 percent of the systems use main processors with eight or more cores, 82.4 percent use twelve or more cores, and 53.2 percent use sixteen or more cores.
We have incorporated the HPCG benchmark results into the TOP500 list to provide a more balanced look at performance.
The two new DOE systems, Summit at ORNL and Sierra at LLNL, also grabbed the first two positions on the HPCG benchmark. Summit achieved 2.93 HPCG-Pflop/s and Sierra 1.80 HPCG-Pflop/s. They are followed by the previous leader, Fujitsu’s K computer, which is ranked #16 in the overall TOP500.
General highlights from the TOP500 since the 50th edition
The entry level to the list moved up to the 716 Tflop/s mark on the Linpack benchmark, compared to 548 Tflop/s six months ago.
The last system on the newest list was listed at position 372 in the previous TOP500. This turnover rate is in line with what has been seen during the last four years, but much lower than earlier levels.
Total combined performance of all 500 systems has for the first time exceeded the exaflop barrier, reaching 1.22 exaflop/s (Eflop/s), compared to 845 Pflop/s six months ago and 749 Pflop/s one year ago. This increase in installed performance is well below the previous long-term trend we had seen until 2013.
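The growth rates behind these figures are simple ratios; the following sketch uses only the Pflop/s totals quoted above and is illustrative, not an official TOP500 statistic.

```python
# Growth of total installed performance (figures in Pflop/s from the text).
now, six_months_ago, one_year_ago = 1220, 845, 749  # 1.22 Eflop/s = 1220 Pflop/s

half_year_growth = now / six_months_ago - 1  # growth over six months
annual_growth = now / one_year_ago - 1       # growth year over year

print(f"{half_year_growth:.1%} {annual_growth:.1%}")  # 44.4% 62.9%
```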
The entry point for the TOP100 increased in six months to 1.71 Pflop/s, up from 1.28 Pflop/s.
The average concurrency level in the TOP500 is 116,100 cores per system, down from 138,000 six months ago but up from 96,160 one year ago. This decrease was caused by the delisting of the ExaScaler Gyoukou system, which had set a record with almost 20 million cores last time.
A total of 476 systems (95.2 percent) are now using Intel processors, slightly up from 94.2 percent six months ago.
The share of IBM Power processors is now at 13 systems, down from 14 systems six months ago.
10G Ethernet (or faster) is now used in 247 systems (up from 228). InfiniBand technology is now found on 139 systems, down from 163 systems, and is the second most-used internal system interconnect technology.
Intel Omni-Path technology is now at 38 systems, up from 35 systems six months ago.
For the first time the leading HPC manufacturer is not from the USA. Lenovo took the lead with 23.8 percent of systems installed. It is followed by HPE with 15.8 percent, Inspur with 13.6 percent, Cray with 11.2 percent, and Sugon with 11 percent.
Lenovo increased from 81 systems six months ago to 119 systems, while HPE decreased from 122 systems to 79. Inspur rose from 56 systems to 68.
Cray now has 56 systems, a comparable number to the last few years.
Sugon features 55 systems in the list, up from 51.
IBM follows with 19 systems remaining under its label.
IBM took the lead as manufacturer in the performance category. Thanks to the Summit and Sierra systems, IBM now contributes 19.9 percent of all performance on the list.
Cray follows with a 16.5 percent share of installed total performance (down from 19.5 percent).
Lenovo is now third with 12.0 percent up from 9.1 percent of performance.
HPE follows with 9.9 percent, down from 15.2 percent six months ago.
Thanks to the Sunway TaihuLight system, NRCPC retains the fifth spot with 7.7 percent of the total performance (down from 11.1 percent).
China remains the leading consumer of HPC systems with 206 systems (up from 202), ahead of the USA at 124 systems (down from 145). The European share (101 systems, up from 93 on the last list) is noticeably lower than the Asian share of 261 systems, up from 252 six months ago.
Dominant countries in Asia are China with 206 systems and Japan with 36 systems (up from 35).
In Europe, the UK increased to 22 systems, Germany remains at 21 systems, and France follows with 18 systems.
The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submissions of all data through a single webpage at http://top500.org/submit
The top 3 positions in the Green500 are all taken by systems installed in Japan.
The first three systems are based on the ZettaScaler-2.2 architecture, while all other systems in the top 10 use NVIDIA GPUs.
The most energy-efficient system and No. 1 on the Green500 is again Shoubu system B, a ZettaScaler-2.2 system at the Advanced Center for Computing and Communication, RIKEN, Japan. It was remeasured and achieved 18.4 GFlops/Watt power efficiency during its 858 Tflop/s Linpack performance run. It is listed at position 362 in the TOP500.
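Since Green500 efficiency is Linpack performance divided by power draw, the power consumption implied by the two figures above can be back-calculated; this is an illustrative estimate derived from the quoted numbers, not a reported measurement.

```python
# Implied power draw: efficiency (GFlops/W) = performance (GFlops) / power (W).
linpack_gflops = 858_000   # 858 Tflop/s Linpack run
gflops_per_watt = 18.4     # measured Green500 efficiency

power_kw = linpack_gflops / gflops_per_watt / 1000
print(f"{power_kw:.1f} kW")  # 46.6 kW
```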
No. 2 in the Green500 is the Suiren2 system at the High Energy Accelerator Research Organization/KEK, Japan. This ZettaScaler-2.2 system achieved 16.8 GFlops/Watt. It is listed at position 421 in the TOP500.
No. 3 in the Green500 is the Sakura system, installed at PEZY Computing K.K., the system's manufacturer, in Japan. It achieved 16.7 GFlops/Watt. It is listed at position 388 in the TOP500.
The fourth position is held by the DGX SaturnV Volta system, an NVIDIA system installed at NVIDIA, USA. It achieved 15.1 GFlops/Watt power efficiency. It is at position 228 in the TOP500.
The fifth position is held by Summit, installed at Oak Ridge National Laboratory. It achieved 13.88 GFlops/Watt power efficiency. It is No. 1 in the TOP500.
The sixth position was taken by the TSUBAME 3.0 system at the GSIC center at the Tokyo Institute of Technology, Japan. It uses NVIDIA GPUs to achieve 13.6 GFlops/Watt power efficiency.
They are followed by the AIST AI Cloud system, the AI Bridging Cloud Infrastructure (ABCI) system, and the new IBM systems MareNostrum P9 (Spain), Summit (USA), and Wilkes-2 (UK), all of which use various NVIDIA GPUs as well.
About the TOP500 List
The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.
The TOP500 list is compiled by Erich Strohmaier and Horst Simon of Lawrence Berkeley National Laboratory; Jack Dongarra of the University of Tennessee, Knoxville; and Martin Meuer of ISC Group, Germany.