By: Michael Feldman
NVIDIA had a trio of announcements at this week’s supercomputing conference (SC16), all of which revolved around the company’s activities in artificial intelligence. Although it was certainly not the only company touting its devotion to AI at this year’s event, NVIDIA seems to have a singular focus on the topic these days.
Probably the most notable bit of news from the GPU-maker was its first appearance on the latest TOP500 list as both a supercomputer site and a system provider. NVIDIA’s in-house DGX SATURNV is now the 28th fastest supercomputer on the planet, with a Linpack mark of 3.3 petaflops. The system is composed of 124 NVIDIA DGX-1 servers, each of which holds eight of the company’s new P100 GPUs. The DGX-1 servers are connected via Mellanox’s EDR InfiniBand fabric.
Perhaps more impressive is DGX SATURNV’s efficiency. While running Linpack, it delivers 9.46 gigaflops per watt, which easily earned it the top spot on the new Green500 list. Only the similarly equipped Piz Daint supercomputer at the Swiss National Supercomputing Centre (CSCS) was in the same territory, with a mark of 7.45 gigaflops per watt.
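To put those figures in perspective, a quick back-of-the-envelope calculation shows what they imply; the derived GPU count and power draw below are illustrative arithmetic from the reported numbers, not official NVIDIA specifications.

```python
# Back-of-the-envelope check of the DGX SATURNV figures reported above.
# Derived totals are illustrative only, computed from the published numbers.

servers = 124                       # DGX-1 servers in the system
gpus_per_server = 8                 # P100 GPUs per DGX-1
linpack_pflops = 3.3                # reported Linpack score, petaflops
gflops_per_watt = 9.46              # reported Green500 efficiency

total_gpus = servers * gpus_per_server          # total P100 count
linpack_gflops = linpack_pflops * 1e6           # petaflops -> gigaflops
implied_power_kw = linpack_gflops / gflops_per_watt / 1e3

print(total_gpus)                   # 992 GPUs
print(round(implied_power_kw))      # ~349 kW implied draw during the run
```

In other words, the machine packs nearly a thousand P100s yet would draw only on the order of 350 kilowatts during the Linpack run, which is what earns it the Green500 crown.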
The DGX SATURNV is being used to help NVIDIA develop the software for DRIVE PX-2, the company’s second-generation autonomous vehicle platform. The system is also being used to analyze chipset designs. Or as the company website says, “we’re using GPUs to help us design GPUs.”
At a press gathering at SC16, NVIDIA CEO Jen-Hsun Huang hinted that the DGX SATURNV would also be used to help build the Cancer Distributed Learning Environment, aka CANDLE, an AI framework being developed for accelerated cancer research. The CANDLE effort will involve a collaboration between NVIDIA, the National Cancer Institute, the US Department of Energy and several national laboratories.
CANDLE is part of the Cancer Moonshot initiative announced earlier this year by the Obama administration, which aims to speed cancer prevention, diagnosis, and treatment by bringing together the latest technologies in life science and computer science. The stated goal is to bring ten years’ worth of cancer R&D to fruition in just five years.
NVIDIA’s third announcement at the conference involved a collaboration with Microsoft on its Cognitive Toolkit (formerly known as CNTK), a software framework for deep learning. Microsoft recently released a beta version of the software for early adopters and developers. The software-maker claims better performance and scalability for its toolkit compared to other popular frameworks like Caffe, TensorFlow, Torch, and Theano.
The Cognitive Toolkit has been optimized to run on NVIDIA Tesla GPUs, including the new P100 processors. So the framework can be used on NVIDIA’s DGX-1 platform and Microsoft’s Azure N-series virtual machines. The latter currently use NVIDIA K80 GPUs. Cray also announced it has validated the toolkit for GPU-accelerated XC and CS systems.