Over the last year, the greenest supercomputers in the world more than doubled their energy efficiency – the biggest jump since the Green500 started ranking these systems more than a decade ago. If such a pace can be maintained, exascale supercomputers operating at less than 20 MW will be possible in as little as two years. But that’s a big if.
For all the supercomputing trends revealed on recent TOP500 lists, the most worrisome is the decline in performance growth over the last several years – worrisome not only because performance is the lifeblood of the HPC industry, but also because there is no definitive explanation for the slowdown.
FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.— The 49th edition of the TOP500 list was released today in conjunction with the opening session of the ISC High Performance conference, which is taking place this week in Frankfurt, Germany. The list ranks the world’s most powerful supercomputers based on the Linpack benchmark and is released twice per year.
Fujitsu Laboratories has developed a circuit technology that is said to improve the energy efficiency of deep learning workloads. According to the company, it plans to commercialize the technology in 2018 as part of its Human Centric AI Zinrai initiative.
Although Google’s Tensor Processing Unit (TPU) has been powering the company’s vast empire of deep learning products since 2015, very little was known about the custom-built processor until now. This week the web giant published a description of the chip and explained why it’s an order of magnitude faster and more energy-efficient than the CPUs and GPUs it replaces.