IBM researchers claim they have come up with a much more efficient model for processing neural networks, using just 8 bits for training and only 4 bits for inferencing. The research is being presented this week at the International Electron Devices Meeting (IEDM) and the Conference on Neural Information Processing Systems (NeurIPS).
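To make the idea of 4-bit inferencing concrete, here is a minimal sketch of symmetric 4-bit quantization in plain Python. This is an illustrative assumption, not IBM's actual scheme: the function names (`quantize_4bit`, `dequantize`) and the simple shared-scale approach are hypothetical, chosen only to show how floats can be mapped into a signed 4-bit range.

```python
def quantize_4bit(values):
    """Map floats to signed 4-bit integers in [-8, 7] using one shared scale.

    A toy symmetric quantizer: real low-precision schemes (including
    whatever IBM uses) are more sophisticated, e.g. per-channel scales.
    """
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 7.0  # largest magnitude maps near the top of the range
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 4-bit codes."""
    return [qi * scale for qi in q]
```

For example, quantizing the weights `[1.0, 0.0, -1.0]` yields the codes `[7, 0, -7]`, and dequantizing reconstructs the originals to within the quantization step; the accuracy challenge the researchers face is keeping such rounding error from degrading model predictions.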
Having moved on from beating up on world champion Go players, DeepMind has developed an artificial intelligence system that just captured top honors in a protein folding prediction competition. Known as AlphaFold, the technology has been two years in the making.
Amazon Web Services (AWS) has launched two new HPC cloud instances that support 100Gbps networking, as well as a network interface that supports MPI communication that can scale to tens of thousands of cores.
At SC18, Depei Qian delivered a talk in which he revealed some of the beefier details of the three Chinese exascale prototype systems installed in 2018. The 45-minute session confirmed some of the speculation about these machines that we have reported on, but also offered a deeper dive into their design and underlying hardware elements.
The 52nd edition of the TOP500 list saw five US Department of Energy (DOE) supercomputers in the top 10 positions, with the top two spots captured by Summit at Oak Ridge National Laboratory (ORNL) and Sierra at Lawrence Livermore National Laboratory (LLNL).
AMD has offered a tantalizing preview of “Rome,” the Zen 2 EPYC processor that will offer up to 64 cores and four times the floating point performance of its predecessor. If the claims hold up, the second-generation processor has a shot at being the highest-performing datacenter CPU in 2019.
Cray has introduced Shasta, its next-generation supercomputer platform that will serve as the company's entry into the realm of exascale computing. The architecture will offer a flexible design that supports a wide array of processors, coprocessors, node configurations, and system interconnects, including one developed by Cray itself.