Highlights - June 2017

Highlights from the Top 10

  • For only the second time ever, there is no system from the USA among the top three. The No. 1 and No. 2 systems are installed in China, and a newly upgraded system in Switzerland, now at No. 3, pushed the top U.S. system down to No. 4. In November 1996 three Japanese systems occupied the top three spots, but at all other times there was at least one U.S. system among the top three.
  • Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu Province, maintains the lead as the No. 1 system with 93 petaflop/s (Pflop/s).
  • Tianhe-2 (Milky Way-2), a system developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now the No. 2 system with 33.86 Pflop/s. Tianhe-2 was the No. 1 system on the TOP500 list for three consecutive years (six lists) before being displaced by Sunway TaihuLight.
  • The new No. 3 is the upgraded Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, and the most powerful system in Europe. Thanks to an upgrade with NVIDIA Tesla P100 GPUs, Piz Daint jumped from its previous 9.77 Pflop/s to 19.59 Pflop/s. The system now has a total of 361,760 cores.
  • Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory and the largest system in the U.S., is now the No. 4 system. It achieved 17.59 Pflop/s using 261,632 of its NVIDIA K20x accelerator cores.
  • Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory, is now the No. 5 system. It was first delivered in 2011 and has achieved 17.17 Pflop/s using 1,572,864 cores.
  • Cori, a Cray XC40 supercomputer installed at DOE's National Energy Research Scientific Computing Center (NERSC) and composed of 1,630 Intel Xeon "Haswell" processor nodes and 9,300 Intel Xeon Phi 7250 ("Knights Landing") nodes, entered the TOP500 in November 2016 and is now at No. 6 with 14.01 Pflop/s using 622,336 cores.
  • Oakforest-PACS, a Fujitsu PRIMERGY CX1640 M1 system installed at the Joint Center for Advanced High Performance Computing in Japan, is powered by Intel Xeon Phi 7250 processors and Intel Omni-Path interconnect technology. It is at No. 7 with 13.55 Pflop/s using 558,144 cores.
  • Fujitsu’s K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, is the No. 8 system with 10.51 Pflop/s using 705,024 SPARC64 processing cores.
  • Mira, a BlueGene/Q system installed at DOE’s Argonne National Laboratory, is No. 9 with 8.59 Pflop/s using 786,432 cores.
  • Trinity, a Cray XC40 system installed at DOE/NNSA/LANL/SNL, is now No. 10 with 8.10 Pflop/s and 301,056 cores.

Highlights from the Overall List

  • The number of systems in the USA decreased slightly to 169, from 171 six months ago.
  • The number of systems installed in China decreased to 160, compared to 171 on the last list.
  • China and the USA are neck-and-neck in the performance category with the USA holding 33.5% of the overall installed performance while China is second with 31.2% of the overall installed performance.
  • There are 138 systems with performance greater than a Pflop/s on the list, up from 117 six months ago.
  • In the Top 10, the No. 2 system Tianhe-2, the No. 6 system Cori, and the No. 7 system Oakforest-PACS use Intel Xeon Phi processors to speed up their computational rates. The No. 3 system Piz Daint and the No. 4 system Titan use NVIDIA GPUs to accelerate computation.
  • A total of 91 systems on the list use accelerator/co-processor technology, up from 86 in November 2016. Of these, 71 use NVIDIA chips, 14 use Intel Xeon Phi technology (as co-processors), one uses ATI Radeon, and two use PEZY technology. Three systems use a combination of NVIDIA and Intel Xeon Phi accelerators/co-processors. An additional 13 systems now use Xeon Phi as the main processing unit.
  • The average number of accelerator cores for these 91 systems is 115,000 cores/system.
  • Intel continues to provide the processors for the largest share (92.8 percent) of TOP500 systems.
  • 93.0 percent of the systems use processors with eight or more cores, 68.6 percent use twelve or more cores, and 27.2 percent use eighteen or more cores.
  • We have incorporated the HPCG benchmark results into the TOP500 list to provide a more balanced look at performance (a brief sketch of the HPCG kernel follows this list).
  • The fastest system on the HPCG benchmark is Fujitsu’s K computer, which is ranked No. 8 in the overall TOP500. It is followed closely by Tianhe-2, the No. 2 system on the TOP500.
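
For context, and not part of the official announcement: HPCG complements Linpack because its dominant kernel is the sparse matrix-vector product inside a preconditioned conjugate-gradient solve, which is limited by memory bandwidth rather than by floating-point throughput. Below is a minimal, illustrative Python sketch of that kernel; it uses a simple 1D Poisson matrix as a stand-in for HPCG's actual 3D 27-point stencil problem and omits the multigrid preconditioner and MPI distribution of the real benchmark.

    import numpy as np
    import scipy.sparse as sp

    # Stand-in problem: 1D Poisson matrix (HPCG itself uses a 3D 27-point stencil).
    n = 100_000
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Plain (unpreconditioned) conjugate-gradient iteration.
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(100):
        Ap = A @ p                  # SpMV: the memory-bandwidth-bound kernel
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-8:  # residual small enough: converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new

Because nearly every floating-point operation in this loop requires fetching fresh matrix and vector data from memory, HPCG scores are typically only a small fraction of a system's Linpack numbers.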


General highlights from the TOP500 since the 48th edition

  • The entry level to the list moved up to the 432.2 Tflop/s mark on the Linpack benchmark, compared to 349.3 Tflop/s six months ago.
  • The last system on the newest list would have been listed at position 391 in the previous TOP500.  This turnover is the same as for the last list.
  • Total combined performance of all 500 systems has grown to 749 Pflop/s, compared to 672.2 Pflop/s six months ago and 566.8 Pflop/s one year ago. This increase in installed performance follows a growth rate in line with Moore’s Law, which is well below the long-term trend we had seen until 2013 (see the arithmetic after this list).
  • The entry point for the TOP100 increased in six months to 1.21 Pflop/s, up from 1.07 Pflop/s.
  • The average concurrency level in the TOP500 is 96,160 cores per system, up from 87,990 six months ago and 81,995 one year ago.
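
To make the Moore's Law comparison concrete, here is the year-over-year arithmetic from the figures above (the pre-2013 growth figure is an approximation from historical TOP500 data):

    \[
      \frac{749~\mathrm{Pflop/s}}{566.8~\mathrm{Pflop/s}} \approx 1.32\ \text{per year},
      \qquad
      2^{1/2} \approx 1.41\ \text{per year (Moore's Law: doubling every two years)}
    \]

At about 32 percent annual growth, the list is tracking roughly in line with, in fact slightly below, the Moore's Law rate of about 41 percent per year, and well below the roughly 80 to 90 percent annual growth observed before 2013.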

Vendor Trends

  • A total of 464 systems (92.8 percent) are now using Intel processors, slightly up from 92.4 percent six months ago.
  • The share of IBM Power processors is now at 21 systems, down from 22 systems six months ago.
  • The AMD Opteron family is used in 6 systems, down from 7 systems on the previous list.
  • Gigabit Ethernet is now at 207 systems (unchanged), in large part thanks to 194 systems now using 10G interfaces.  InfiniBand technology is now found on 178 systems, down from 187 systems, and is the second most-used internal system interconnect technology.
  • Intel Omni-Path technology, which made its first appearance one year ago with 8 systems, is now found in 38 systems, up from 28 systems six months ago.
  • HPE has the lead in systems and now has 143 systems (28.6 percent). This count for HPE includes 25 systems originally installed by SGI, which HPE has since acquired. HPE had 140 systems six months ago. Lenovo follows with 88 systems, down from 92 systems. Cray now has 57 systems, up from 56 systems six months ago. Sugon features 44 systems on the list. IBM is now fifth in the systems category with 30 systems; only one of these IBM systems is new on this list.

Performance Trends

  • Cray continues to be the clear leader in the TOP500 list in performance and has a considerable lead with a 21.4 percent share of installed total performance (up from 21.3 percent).
  • Due to its SGI-based systems, HPE is second with 16.6 percent, up from 15.8 percent six months ago.
  • Thanks to the Sunway TaihuLight system, NRCPC retains the third spot with 12.5 percent of the total performance (down from 13.8 percent).
  • Lenovo is fourth with 9.9 percent of performance.
  • IBM is in the fifth spot with 8.0 percent share.
  • Thanks to Tianhe-2 and Tianhe-1A, NUDT contributes 5.2 percent of the total performance of the list, down from 5.8 percent.

Geographical Observations

  • The U.S. is again the leading consumer of HPC systems with 169 systems (down from 171), ahead of China with 160 systems (down from 171). The European share (105 systems, unchanged from the last list) is noticeably lower than the Asian share of 210 systems, down from 213 in November 2016.
  • Dominant countries in Asia are China with 160 systems and Japan with 33 systems (up from 27).  
  • In Europe, Germany is the clear leader with 28 systems followed by France with 18 and the UK with 17 systems.  

Green500

  • The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submission of all data through a single web page at http://top500.org/submit.
  • The top four positions in the Green500 are all taken by newly installed systems in Japan, and #5 is captured by Piz Daint. All of these systems use NVIDIA Tesla P100 GPUs to achieve their efficiency.
  • The most energy-efficient system and #1 on the Green500 is the new TSUBAME 3.0, a modified HPE ICE XA system at the GSIC Center, Tokyo Institute of Technology, Japan. It uses NVIDIA Tesla P100 SXM2 GPUs and achieved 14.110 GFlops/Watt power efficiency during its 1.998 Pflop/s Linpack performance run (a worked power calculation follows this list). It is listed at position 61 in the TOP500.
  • #2 in the Green500 is the kukai system at the Yahoo Japan Corporation, Japan. This system from ExaScaler uses NVIDIA Tesla P100 GPUs to achieve 14.045 GFlops/Watt, putting it less than 0.5 percent behind the #1 TSUBAME 3.0 system. It is listed at position 466 in the TOP500.
  • #3 in the Green500 is the AIST AI Cloud system at the National Institute of Advanced Industrial Science and Technology, Japan. This system from NEC also uses NVIDIA Tesla P100 SXM2 GPUs, achieving 12.68 GFlops/Watt. It is listed at position 148 in the TOP500.
  • #4 is the RAIDEN GPU system, a Fujitsu system at the Center for Advanced Intelligence Project, RIKEN, Japan, which uses NVIDIA Tesla P100 GPUs to achieve 10.6 GFlops/Watt power efficiency. It is listed at position 306 in the TOP500.
  • #5 is the Piz Daint system, a Cray XC50 system in Switzerland. It conducted a power-optimized run of the Linpack benchmark, achieving 10.4 GFlops/Watt. It is listed at position 3 in the TOP500.
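
As a sanity check on the GFlops/Watt metric, the figures reported above for TSUBAME 3.0 imply its power draw during the Linpack run:

    \[
      \frac{1.998~\mathrm{Pflop/s}}{14.110~\mathrm{GFlops/W}}
      = \frac{1{,}998{,}000~\mathrm{Gflop/s}}{14.110~\mathrm{GFlops/W}}
      \approx 142~\mathrm{kW}
    \]

The same division applied to any Green500 entry converts its efficiency figure back into the power consumed during the benchmark run.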

About the TOP500 List

The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.

The TOP500 list is compiled by Erich Strohmaier and Horst Simon of Lawrence Berkeley National Laboratory; Jack Dongarra of the University of Tennessee, Knoxville; and Martin Meuer of ISC Group, Germany.