Researchers from Intel and the University of California, Berkeley have proposed a new category of logic and memory devices that could offer 10 to 100 times the energy efficiency of microprocessors based on complementary metal-oxide-semiconductor (CMOS) technology.
IBM researchers claim they have come up with a much more efficient model for processing neural networks, using just 8 bits for training and only 4 bits for inferencing. The research is being presented this week at the International Electron Devices Meeting (IEDM) and the Conference on Neural Information Processing Systems (NeurIPS).
Having moved on from beating up on world champion Go players, DeepMind has developed an artificial intelligence system that just captured top honors in a protein folding prediction competition. Known as AlphaFold, the technology has been two years in the making.
Amazon Web Services (AWS) has launched two new HPC cloud instances that support 100Gbps networking, as well as a network interface supporting MPI communication that can scale to tens of thousands of cores.
Roselectronika, a holding company of Rostec State Corporation, claims to have built a compact mobile supercomputer that can achieve 2.2 petaflops and has a storage capacity of 2.2 petabytes.
NVIDIA continues to rack up wins with HGX-2, the company's 16-GPU server platform for accelerating artificial intelligence, analytics, and HPC.
Episode 252: Addison Snell and Michael Feldman analyze their takeaways from SC18, including news from Atos, Mellanox, and Panasas.
On November 12th at the SC18 International Conference for High Performance Computing, Networking, Storage and Analysis in Dallas, USA, Inspur released its AI super-server AGX-5. Capable of computing deep learning workloads at up to 2 petaflops within a single server, AGX-5 is one of the most powerful computers of its kind in the world.
Atos has announced the BullSequana XH2000, which the company is touting as its most flexible and energy efficient HPC platform to date.
At SC18, Depei Qian delivered a talk where he revealed some of the beefier details of the three Chinese exascale prototype systems installed in 2018. The 45-minute session confirmed some of the speculation about these machines that we have reported on, but also offered a deeper dive into their design and underlying hardware elements.