TrueNorth, IBM’s brain-like microprocessor, has been found to be exceptionally proficient at inference work for deep neural networks. In particular, the chip has proven especially good at image recognition, able to classify such data accurately and far more energy-efficiently than traditional processor architectures, suggesting new applications in mobile computing, IoT, robotics, autonomous cars, and HPC.
BenevolentAI, a London-based artificial intelligence company specializing in health and bioscience applications, has acquired NVIDIA’s DGX-1, a deep learning system accelerated by eight Tesla P100 GPUs. The company plans to use the $129,000 machine to advance its work in drug discovery and related biomedical research.
Researchers at the University of Basel in Switzerland have used machine learning to predict the thermodynamic characteristics of 90 new mineral compounds with potential commercial use. The machine learning models were able to predict the chemical stability of all possible variants of a particular class of crystals several orders of magnitude faster than if the researchers had relied on quantum mechanical calculations.
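The idea behind this kind of speedup can be sketched as a surrogate model: train a regressor on a set of compounds whose stability has already been computed from first principles, then use the cheap model to screen the rest. The sketch below is illustrative only, using synthetic descriptors and a synthetic formation-energy-like target rather than the Basel group's actual data or method.

```python
# Hedged sketch of an ML surrogate for expensive quantum mechanical
# stability calculations. Descriptors and targets here are synthetic
# placeholders, not real crystal data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "crystal descriptors" (stand-ins for composition/structure
# features) and a synthetic formation-energy-like target.
X = rng.normal(size=(500, 8))
y = 0.5 * X[:, 0] - X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# Train on compounds already evaluated with first-principles methods.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])

# Screening the remaining candidates now takes milliseconds, versus
# hours per compound for a full quantum mechanical calculation.
preds = model.predict(X[400:])
print(preds.shape)
```

The payoff is exactly the one described above: once trained, the surrogate can rank thousands of candidate compounds essentially for free, reserving expensive calculations for the most promising few.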
The path to exascale computing hasn’t been an easy one. It has had to face a daunting set of challenges in energy efficiency, application parallelism, and system reliability, just to name a few. The difficulties in bringing the hardware and software up to this level are considerable, but there is a more fundamental challenge at the heart of exascale: doing the necessary work of building an ecosystem that will last for a decade or more, not just for a handful of stunt machines.