All changes are from November 2004 to June 2005:
The list again shows a major shake-up of the TOP10.
Only half of the TOP10 systems from November 2004 are still large enough to hold on to a TOP10 position, and five new systems entered it.
The new and previous #1 is DOE's IBM BlueGene/L system, now installed at DOE's Lawrence Livermore National Laboratory (LLNL). It has doubled in size and now achieves a Linpack performance of 136.8 TFlop/s.
The new #2 is a second IBM eServer Blue Gene Solution system, installed at IBM's Thomas J. Watson Research Center, with 91.20 TFlop/s Linpack performance.
The Columbia system at NASA/Ames, built by SGI, slipped from the #2 spot, which it had gained just six months ago, to #3, with an equally impressive 51.87 TFlop/s.
The Earth Simulator, built by NEC, which had held the #1 spot for five consecutive lists, is now #4.
The #5 spot was barely captured by the upgraded MareNostrum system at the Barcelona Supercomputing Center. This IBM BladeCenter JS20-based system with a Myrinet interconnect achieved 27.91 TFlop/s - just ahead of a third Blue Gene system, owned by ASTRON and installed at the University of Groningen, with 27.45 TFlop/s.
The #10 spot was captured by an early measurement of Cray's Red Storm system at Sandia National Laboratories with 15.25 TFlop/s. This is also the new entry level for the TOP10, up from just under 10 TFlop/s Linpack performance six months ago.
As predicted several years ago, only systems exceeding the 1 TFlop/s mark on the Linpack benchmark were able to enter the list.
The last system on the list - with 1.166 TFlop/s - would have been listed at position 299 in the last TOP500 just six months ago. This exemplifies the continuous rapid turnover of the TOP500.
The last system (#500) in June 2005 has about the same compute power as all 500 systems combined had when the list was first created 12 years ago in June 1993.
Total accumulated performance has grown to 1.69 PFlop/s, compared to 1.127 PFlop/s six months ago.
Entry level is now 1.166 TFlop/s, compared to 850 GFlop/s six months ago.
The entry point for the top 100 moved from 2.026 TFlop/s to 3.412 TFlop/s.
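The six-month growth figures above can be cross-checked with a few lines of arithmetic (a quick sketch; all values are taken directly from the numbers quoted above, expressed in GFlop/s):

```python
# Six-month growth factors for the June 2005 list versus November 2004.
# Figures quoted in the text, converted to GFlop/s.
stats = {
    "total performance": (1_127_000, 1_690_000),  # 1.127 -> 1.69 PFlop/s
    "entry level (#500)": (850, 1_166),           # 850 GFlop/s -> 1.166 TFlop/s
    "top-100 entry": (2_026, 3_412),              # 2.026 -> 3.412 TFlop/s
}

for name, (nov_2004, jun_2005) in stats.items():
    growth = jun_2005 / nov_2004
    print(f"{name}: x{growth:.2f} in six months")
```

Total installed performance thus grew by roughly 50 percent in six months, while the entry level grew by about 37 percent.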
A total of 333 systems are now using Intel processors. Six months ago there were 320 Intel-based systems on the list and one year ago only 287.
The second most common processor family is the IBM Power processor (77 systems), ahead of PA-RISC processors (36) and AMD processors (25).
304 systems are labeled as clusters, making this the most common architecture in the TOP500.
At present, IBM and Hewlett-Packard sell the bulk of systems at all performance levels of the TOP500.
IBM remains the clear leader in the TOP500 list and increased its lead to 51.8 percent of systems and 57.9 percent of installed performance.
HP is second with 26.2 percent of systems and 13.3 percent of performance.
SGI is third with 5 percent of systems and 7.45 percent of performance.
No other manufacturer is able to capture more than 5 percent in any category.
The U.S. is clearly the leading consumer of HPC systems, with 294 of the 500 systems installed there. A geographical trend that started a few years ago now emerges more clearly: the number of systems in Asian countries other than Japan is rising quite steadily. On this list, Japan has 23 systems and all other Asian countries combined have an additional 58 systems. However, Europe is still ahead of Asia, with 114 systems installed.
19 of the systems in Asia are installed in China -- up from 17 systems six months ago.
The number of systems installed in the U.S. has increased to 294, up from 267 six months ago.
In Europe, Germany reclaimed the #1 spot from the UK, with 40 systems compared to the UK's 32. Six months ago the UK led with 42 systems to Germany's 35.
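As a sanity check, the regional counts quoted above can be summed (a quick sketch; the figure for the remaining regions is inferred from the total of 500, not stated in the text):

```python
# Regional system counts quoted in the June 2005 list discussion.
regions = {
    "United States": 294,
    "Japan": 23,
    "Asia (excl. Japan)": 58,
    "Europe": 114,
}

accounted = sum(regions.values())
print(accounted)        # systems accounted for by the quoted regions
print(500 - accounted)  # systems in the remaining regions (inferred)
```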