All changes are from November 2006 to June 2007:
While the No. 1 system is still unchallenged, the rest of the TOP10 experienced large changes since November 2006.
The No. 1 position is once again held by DOE's IBM BlueGene/L system, installed at DOE's Lawrence Livermore National Laboratory (LLNL), with a Linpack performance of 280.6 TFlop/s.
The upgraded Cray XT4/XT3 system at DOE's Oak Ridge National Laboratory is the third system ever recorded to exceed the 100 TFlop/s mark. It is No. 2 with 101.7 TFlop/s.
It ever so slightly edged out Sandia’s Cray Red Storm system, which holds the No. 3 spot with 101.4 TFlop/s.
Two new BlueGene/L systems entered the TOP10. They are both located in the state of New York and represent the largest academic supercomputer installations.
The No. 5 system is installed at the New York Center for Computational Science (NYCCS) in Stony Brook, NY (http://www.sunysb.edu/nyccs/).
The No. 7 system is installed at Rensselaer Polytechnic Institute's Computational Center for Nanotechnology Innovations (CCNI) in Troy, NY (http://www.rpi.edu/research/ccni/).
The new No. 8 system was built by Dell and is installed at NCSA in Illinois. It was measured at 62.68 TFlop/s.
Just behind at No. 9 is the largest system in Europe, an IBM JS21 cluster installed at the Barcelona Supercomputing Center, with a performance of 62.63 TFlop/s. It held the No. 5 spot just six months ago.
The No. 10 spot was captured by a new SGI system installed at the Leibniz Computing Center in Munich, Germany. It was measured at 56.52 TFlop/s.
The first Japanese system is at No. 14. It is a cluster integrated by NEC based on Sun Fire X4600 with Opteron processors, ClearSpeed accelerators and an InfiniBand interconnect, installed at the Tokyo Institute of Technology.
The entry level to the list moved up to the 4.005 TFlop/s mark on the Linpack benchmark, compared to 2.737 TFlop/s six months ago.
The last system on the list would have been listed at position 216 in the last TOP500 just six months ago. This is the largest turnover rate ever seen in the 15 years of the TOP500 project.
Total accumulated performance has grown to 4.92 PFlop/s, compared to 3.54 PFlop/s six months ago and 2.79 PFlop/s one year ago.
The entry point for the top 100 increased in six months from 6.65 TFlop/s to 9.29 TFlop/s.
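As a rough sanity check, the six- and twelve-month growth of the total list performance can be recomputed directly from the figures above. A minimal sketch in Python, using only the numbers quoted in the text:

```python
# Total accumulated Linpack performance of the list (PFlop/s),
# as quoted above for June 2007, November 2006, and June 2006.
total_now, total_6mo, total_12mo = 4.92, 3.54, 2.79

# Relative growth over six and twelve months.
growth_6mo = total_now / total_6mo - 1
growth_12mo = total_now / total_12mo - 1

print(f"6-month growth:  {growth_6mo:.1%}")   # roughly 39%
print(f"12-month growth: {growth_12mo:.1%}")  # roughly 76%
```

This corresponds to the roughly 90 percent annual growth rate that Moore's-law-style extrapolations of the list have historically shown.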
A total of 289 systems (57.8 percent) are now using Intel processors. This is slightly up from six months ago (261 systems, 52.5 percent) and represents a typical recent fraction for Intel chips in the TOP500.
The AMD Opteron family, which passed the IBM Power processors six months ago, remained the second most common processor family with 105 systems (21 percent), down from 113 systems (22.6 percent) six months ago. 85 systems (17 percent) use IBM Power processors, down from 93 systems (18.6 percent) six months ago.
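The processor-family shares quoted above follow directly from the system counts out of the 500 list entries; a small sketch recomputing them (counts are the ones in the text):

```python
# System counts per processor family (June 2007 list, 500 entries total).
counts = {"Intel": 289, "AMD Opteron": 105, "IBM Power": 85}

for family, systems in counts.items():
    share = systems / 500
    print(f"{family}: {systems} systems, {share:.1%}")
```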
Dual-core processors are the dominant chip architecture. The number of systems using Intel Woodcrest dual-core chips showed the most impressive growth, rising from 31 to 205 in six months. Another 90 systems use Opteron dual-core processors, up from 75 six months ago.
373 systems are labeled as clusters, making this the most common architecture in the TOP500 with a stable share of 74.6 percent.
InfiniBand technology strongly increased its share to 127 systems, up from 78 six months ago. But Gigabit Ethernet is still the most-used internal system interconnect technology (207 systems, down from 211 six months ago).
For quite some time, IBM and Hewlett-Packard have sold the bulk of systems at all performance levels of the TOP500.
IBM had been ahead of HP since June 2004 but lost the lead in number of systems this time, with 38.4 percent (down from 47.2 percent) compared to HP with 40.6 percent (up from 31.6 percent).
IBM remains the clear leader in the TOP500 list in performance with 41.9 percent of installed performance (down from 49.5) compared to HP with 24.5 percent (up from 16.5).
In the system category again no other manufacturer could break the 5 percent barrier, but Dell got very close with 4.8 percent.
In the performance category the manufacturers with more than 5 percent are: Dell (9 percent of performance), Cray (7.3 percent), and SGI (5.7 percent), each of which benefits from large systems in the TOP10.
IBM (82 systems) and HP (181 systems) together sold 263 of the 269 systems installed at commercial and industrial customers and have this important market segment clearly cornered.
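Just how cornered this segment is can be seen by recomputing the combined IBM/HP share from the counts above (a quick check, numbers as quoted in the text):

```python
# Systems at commercial and industrial customers (June 2007 list).
ibm_systems, hp_systems, segment_total = 82, 181, 269

combined = ibm_systems + hp_systems
print(f"IBM + HP: {combined} of {segment_total} "
      f"({combined / segment_total:.1%})")  # 263 of 269, roughly 97.8%
```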
The U.S. is clearly the leading consumer of HPC systems with 281 of the 500 systems. The European share (127 systems, up from 95) recovered and is again larger than the Asian share (72 systems, down from 79).
Dominant countries in Asia are Japan with 23 systems (down from 30) and China with 13 systems (down from 18).
In Europe, the UK has established itself as No. 1 with 43 systems (32 six months ago). Germany has to live with the No. 2 spot with 24 systems (19 six months ago).
The entry level into the TOP50 is at 15.8 TFlop/s.
The U.S. has about the same percentage of systems (58 percent) in the TOP50 as in the TOP500, while Japan has an increased share of 12 percent.
The dominant architectures are custom-built massively parallel processing systems (MPPs) with 60 percent, ahead of commodity clusters with 36 percent.
IBM leads the TOP50 with 46 percent of systems and 49 percent of performance.
No. 2 is Dell with 18 percent of systems and 15.8 percent of performance.
Cray is third with 10 percent of systems and 13.9 percent of performance closely followed by SGI with 10 percent of systems and 8.4 percent of performance.
HP is currently absent from the TOP50.
50 percent of systems are installed at research labs and 38 percent at universities.
There is only a single system using Gigabit Ethernet in the TOP50.
IBM’s BlueGene is the most used system family with 13 systems (26 percent).
IBM’s Power processors are used in 46 percent of systems, ahead of Intel processors with 40 percent and AMD with 12 percent.
The average concurrency level is 11,300 cores per system.
The average age of a TOP50 system is only about 1 year and 4 months. 48 percent have been installed or upgraded this year and 34 percent last year.