The US Department of Energy announced that it will funnel $258 million into six tech companies as part of its PathForward program to develop new HPC technologies for exascale supercomputers. The awardees include AMD, Cray, HPE, IBM, Intel, and NVIDIA.
In 1999, Peter Braam introduced Lustre, an open-source parallel file system that went on to become one of the most popular software packages for supercomputing. Braam's success with Lustre was just the start of a career in which he founded five companies and guided technology development at numerous others, including Sun Microsystems, Red Hat, and Xyratex. He is currently working with Cambridge University on the Square Kilometre Array project and its Science Data Processor effort.
A research team at the Massachusetts Institute of Technology (MIT) has come up with a novel approach to deep learning that uses a nanophotonic processor, which they claim can vastly improve the performance and energy efficiency of processing artificial neural networks.
When assessing the fastest supercomputers in the world, system performance is king, while the I/O componentry that feeds these computational beasts often escapes notice. But a small group of storage devotees working on a project at the Virtual Institute for I/O (VI4IO) wants to change that.