The TOP500 project was launched in 1993 to improve and renew the Mannheim supercomputer statistics, which had been in use for seven years.

Our simple TOP500 approach does not define “supercomputer” as such; instead, we use a benchmark to rank systems and to decide whether they qualify for the TOP500 list. The benchmark we settled on was Linpack, which means that systems are ranked solely by their ability to solve a dense system of linear equations, A x = b, with a random matrix A. Any supercomputer – whatever its architecture – can therefore make it into the TOP500 list, as long as it can solve such a system using floating-point arithmetic. We have been criticized for this choice from the very beginning, but now, after 20 years, we can say that it was exactly this choice that made TOP500 so successful – Linpack was a good choice. And there was, and still is, no real alternative to Linpack: any other benchmark would have been similarly specific, but would not have been so readily available for all systems – a very important factor, since compiling the TOP500 lists twice a year is a complex process.

Another of Linpack’s advantages is its scalability: over the past 20 years it has allowed us to benchmark systems spanning a performance range of 12 orders of magnitude. It is true that Linpack’s performance figures sit at the upper end of what any real application achieves; in fact, no realistic application delivers a better efficiency (Rmax/Rpeak) on a given system. But using the peak performance instead of Linpack, as “experts” have often recommended to us, makes no sense at all. We have seen many new systems that were unable to run the Linpack test because they were not yet stable enough. Running Linpack to measure performance therefore also serves as a first reliability test for new HPC systems.
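A small illustration of the efficiency metric mentioned above, with hypothetical Rmax and Rpeak figures (not taken from any real list entry):

```python
def efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Linpack efficiency: the fraction of theoretical peak performance
    actually achieved on the benchmark, i.e. Rmax / Rpeak."""
    return rmax_tflops / rpeak_tflops

# Hypothetical system: 8.0 Tflop/s measured (Rmax) vs. 10.0 Tflop/s peak (Rpeak).
print(f"{efficiency(8.0, 10.0):.0%}")  # → 80%
```

Because Linpack is dominated by dense matrix operations, this ratio is an upper bound in practice: typical applications with irregular memory access achieve a far smaller fraction of peak.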

The misinterpretation of TOP500 results has surely contributed to a negative attitude towards Linpack. Politicians, for example, tend to read a system’s TOP500 rank as a general ranking valid for all applications, which of course is not true. The TOP500 rank reflects only a system’s ability to solve a dense linear system and says nothing about its performance on other applications. The TOP500 list is therefore not a tool for selecting a supercomputer for an organization – centers need to run their own benchmarks that are relevant to their applications. In this context, an approach such as the “HPC Challenge Benchmark”, a suite of seven benchmarks that exercise different parts of a supercomputer, is a valuable complement.

The TOP500 lists’ success lies in compiling and analyzing data over time. Despite relying solely on Linpack, we have been able to correctly identify and track all developments and trends of the past 20 years, covering manufacturers and users of HPC systems, architectures, interconnects, processors, operating systems, and more. Above all, TOP500’s strength is that it has proved an exceptionally reliable tool for forecasting developments in performance.

It is very unlikely that another benchmark will replace Linpack as the basis for the TOP500 lists in the near future. In any case, we would stick with the concept of a single benchmark, because it is the easiest way to trigger competition between manufacturers, countries, and sites, which is extremely important for the overall acceptance of the TOP500 lists.