IBM has added TensorFlow, a machine learning software library developed by Google, to its PowerAI distribution. The move aims to expand the options for customers interested in the enterprise-ready machine learning systems that IBM is targeting with its Power/NVIDIA GPU server gear.
PowerAI is IBM’s software distribution for its S822LC for High Performance Computing platform, which is based on servers equipped with Power8 CPUs and NVIDIA Tesla P100 GPUs. When launched at SC16, PowerAI came with a variety of open source machine learning frameworks, including Theano, Torch, and Caffe, which are among the most widely used in the AI community. The implementations provided in PowerAI are able to take advantage of the S822LC hardware – in particular NVIDIA’s deep-learning-optimized P100 GPUs.
From the standpoint of P100 support, the IBM server is a unique platform. Using the Power8’s built-in support for NVIDIA’s high-performance NVLink interconnect, the S822LC is the only system on the market able to provide CPU-to-GPU data transfers at speeds 5 to 12 times faster than a conventional PCIe Gen 3 hookup. This capability is useful for many HPC and high performance analytics applications, including the training of neural networks for things like facial identification, language translation, and voice recognition. Training these neural networks is the most computationally intensive part of these applications, and one that is often limited by the bandwidth available inside the server.
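To see why training stresses both compute and bandwidth, it helps to look at what a single training step actually does: move a batch of data, run matrix multiplies, and update weights. The following is a deliberately minimal, framework-agnostic sketch in NumPy of that loop for one dense layer trained with gradient descent; all names and sizes are illustrative, not drawn from PowerAI or TensorFlow.

```python
import numpy as np

# Illustrative sketch of one-layer neural network training: each step is a
# forward pass (matrix multiply), a loss, a gradient, and a weight update.
# In a real framework like TensorFlow this runs on the GPU, and the batch
# of input data must first cross the CPU-to-GPU link -- the transfer that
# NVLink accelerates.
rng = np.random.default_rng(0)

X = rng.standard_normal((64, 8))       # mini-batch of 64 examples, 8 features
true_w = rng.standard_normal((8, 1))
y = X @ true_w                         # synthetic regression targets

w = np.zeros((8, 1))                   # trainable weights
lr = 0.1                               # learning rate

losses = []
for step in range(100):
    pred = X @ w                       # forward pass: the compute-heavy matmul
    err = pred - y
    loss = float(np.mean(err ** 2))    # mean-squared-error loss
    grad = 2.0 * X.T @ err / len(X)    # gradient of the loss w.r.t. w
    w -= lr * grad                     # gradient-descent update
    losses.append(loss)
```

Real deep learning workloads repeat this pattern over millions of parameters and many epochs, which is why both GPU throughput and the speed of feeding data to the GPU matter.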
One of the most popular pieces of software for this training is TensorFlow, a general-purpose deep learning framework used by AI developers. As an open source software platform backed by Google – it was originally developed by the Google Brain Team – TensorFlow has become one of the most widely used frameworks in the community. In fact, the involvement of Google, which implies robust and long-lived support, is perceived as one of its chief advantages over competing libraries. Such support is especially important to would-be enterprise customers.
In a prepared statement, Rajat Monga, engineering leader for TensorFlow, highlighted this enterprise angle. “TensorFlow is quickly becoming a viable option for companies interested in deploying deep learning for tasks ranging from computer vision, to speech recognition, to text analytics,” he said. “IBM’s enterprise offering of TensorFlow will help organizations deploy this framework -- we’re glad to see this support.”
The purpose of the exercise for IBM is to wedge itself into the rapidly expanding machine learning/AI marketplace – in particular, the new territory of enterprise AI that the company thinks is ripe for its wares. Over the past few years, hyperscale companies like Google, Facebook, Microsoft, and Amazon jump-started the machine learning industry, vaulting IBM partner NVIDIA into prominence. Now Big Blue wants a slice of the action and believes its best shot at this market lies in its expertise in business, retail, banking, and other commercial domains. We certainly expect IBM to let us know if and when it starts winning over customers in this space with its S822LC/PowerAI solution.