By: Michael Feldman
The popularity of NVIDIA’s Tesla V100 GPUs got another boost this week with IBM’s announcement that it has added them to its cloud offerings.
According to a blog post penned by John Considine, GM, Cloud Infrastructure Services, IBM Watson and Cloud Platform, the new GPUs are now available in the company’s bare metal servers. As with its previous GPU cloud offerings – the P100, K80, and M60 – IBM is gunning primarily for users looking to accelerate their HPC and AI/machine learning workloads.
Considine mentions a couple of cloud-based use cases that employed the older P100 hardware. The first example is from the NASA Frontier Development Lab, which used the GPUs to speed up 3D modeling of asteroids based on radar data. “With an average of 35 new asteroids and near-Earth objects discovered each week, there is currently more data available than experts can keep up with, and existing 3D modeling processes can take several months,” he writes. Using the P100 cloud servers provided a five-fold to six-fold speedup, he says. The second use case was from a medical device company called SpectralMD, which used GPU acceleration to train and test deep learning models to help select treatment options for wounds. According to Considine, SpectralMD was able to cut the cross-validation time of its models from weeks to hours using the P100 gear.
The V100 GPUs are considerably more powerful than the P100s – about 50 percent faster on double precision floating point math and several times faster on deep learning calculations. Initially at least, IBM is employing the PCIe-based V100 devices, which top out at 7 teraflops on double precision, 14 teraflops on single precision, and 112 teraflops for deep learning/mixed precision. The NVLink-based V100s are marginally faster and more expensive, but since IBM isn’t currently offering a Power8/9-V100 combo or server configurations with lots of GPUs, the faster data transfers enabled by NVLink don’t offer much of an advantage.
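The speedup claims above follow directly from the published peak figures for the two PCIe cards. A quick sketch (the P100 numbers – 4.7/9.3/18.7 teraflops for FP64/FP32/FP16 – are NVIDIA’s published PCIe specs, not stated in the article):

```python
# Peak-throughput comparison of NVIDIA's PCIe P100 and V100,
# in teraflops, using NVIDIA's published peak figures.
p100 = {"fp64": 4.7, "fp32": 9.3, "fp16": 18.7}
v100 = {"fp64": 7.0, "fp32": 14.0, "tensor": 112.0}

# ~1.49x on double precision -- "about 50 percent faster"
fp64_speedup = v100["fp64"] / p100["fp64"]

# ~6x on deep learning math (V100 tensor cores vs. P100 FP16)
dl_speedup = v100["tensor"] / p100["fp16"]

print(f"FP64 speedup: {fp64_speedup:.2f}x")
print(f"Deep learning speedup: {dl_speedup:.1f}x")
```

These are theoretical peaks; real workloads typically realize only a fraction of them.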
A bare metal IBM cloud server equipped with a single V100 and two 16-core Xeon CPUs (E5-2640 v4) can be rented for as little as $1,819 per month, $900 of which is for the GPU itself. Opting for a second GPU costs an additional $900. Considering that IBM’s starting price for its P100 bare metal server is $1,569 per month, the V100 servers could be a bargain if your application can take advantage of the extra flops. If it can’t, or can only squeeze out a fraction of the V100’s theoretical performance, the P100 could be the better option, price-performance-wise.
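A back-of-the-envelope calculation shows why the V100 server can be the better deal at peak utilization. Prices are the entry-level monthly figures from the article; the peak FP64 numbers are NVIDIA’s published PCIe specs (the P100 figure is an assumption, as the article doesn’t state it):

```python
# Monthly cost per peak FP64 teraflop for IBM's entry-level
# single-GPU bare metal servers.
v100_price, v100_fp64 = 1819, 7.0   # $/month, peak FP64 TFLOPS
p100_price, p100_fp64 = 1569, 4.7

v100_cost_per_tf = v100_price / v100_fp64   # ~$260 per TFLOP-month
p100_cost_per_tf = p100_price / p100_fp64   # ~$334 per TFLOP-month

print(f"V100: ${v100_cost_per_tf:.0f} per peak FP64 TFLOP-month")
print(f"P100: ${p100_cost_per_tf:.0f} per peak FP64 TFLOP-month")
```

At full theoretical throughput the V100 box delivers flops more cheaply; if an application only reaches, say, half the V100’s peak, that advantage disappears.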
IBM is not alone in offering NVIDIA’s newest silicon in its cloud. Amazon, Microsoft, and Baidu have all adopted the V100 for public cloud services (with Google still a holdout). The fact that NVIDIA has done so much groundwork with CUDA and its machine learning stack to make its GPUs accessible to users makes these upgrade choices a lot easier for these cloud providers. And as long as NVIDIA GPUs remain the accelerator of choice for both HPC and AI applications, those choices will continue to be easy.
Image credit: Connie Zhou for IBM