A set of academic articles recently published on post-exascale supercomputing paints a picture of an HPC landscape that will be fundamentally different from the one we now inhabit. But the write-ups avoid one obvious conclusion.
IBM researchers claim they have come up with a much more efficient model for processing neural networks, using just 8 bits for training and only 4 bits for inferencing. The research is being presented this week at the International Electron Devices Meeting (IEDM) and the Conference on Neural Information Processing Systems (NeurIPS).
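The core idea behind such low-bit schemes is mapping floating-point values onto a small set of integer levels. The sketch below is a generic symmetric 4-bit quantizer, not IBM's actual method (which the article does not detail); the function names and the simple max-scaling rule are illustrative assumptions only.

```python
import numpy as np

def quantize_int4(x):
    """Illustrative symmetric linear quantization to 4-bit integers.

    NOTE: this is a generic textbook sketch, not IBM's scheme. It maps
    floats onto the 16 representable signed 4-bit levels [-8, 7] by
    scaling the largest magnitude to 7.
    """
    scale = np.max(np.abs(x)) / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate floats from the 4-bit codes."""
    return q.astype(np.float32) * scale

# Quantize a few weights and measure the round-trip error.
x = np.array([0.1, -0.5, 0.9, -1.2], dtype=np.float32)
q, s = quantize_int4(x)
x_hat = dequantize_int4(q, s)
```

With only 16 levels the round-trip error is bounded by half the scale step, which is why 4-bit inference trades a little accuracy for a large cut in memory traffic and compute energy.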
Having moved on from beating up on world champion Go players, DeepMind has developed an artificial intelligence system that just captured top honors in a protein folding prediction competition. Known as AlphaFold, the technology has been two years in the making.
Amazon Web Services (AWS) has launched two new HPC cloud instances that support 100Gbps networking, as well as a network interface that supports MPI communication that can scale to tens of thousands of cores.
At SC18, Depei Qian delivered a talk where he revealed some of the beefier details of the three Chinese exascale prototype systems installed in 2018. The 45-minute session confirmed some of the speculation about these machines that we have reported on, but also offered a deeper dive into their design and underlying hardware elements.