Cray and Microsoft have announced a new HPC cloud solution that brings dedicated supercomputers into Azure datacenters.
Just to be clear, this is not a general supercomputing-on-demand service. Far from it. The customer will still have to purchase the system from Cray, which will then be installed at an Azure datacenter. Cray will manage the system for the customer, and Azure will provide the network plumbing, along with its suite of cloud services.
Customers will get exclusive access to their system, which will be hooked into the Azure backbone so they can use all the cloud services that Microsoft provides. This includes things like cloud bursting, data lake storage, object storage, and AI/machine learning services. The big draw here is that huge amounts of Azure-stored data and Cray supercomputers can exist side-by-side, which is particularly advantageous to data-hungry applications like seismic analysis, medical imaging, and engineering simulations, to name a few.
Cray will be offering both its XC and CS supercomputing lines, but it’s the XC supercomputer that will likely be the biggest draw for would-be Microsoft customers, since it offers a level of system integration and scalability that is missing from the Azure cloud. Microsoft does offer general-purpose HPC solutions, where you can configure a few thousand cores into an Azure cluster, but that’s a far cry from a dedicated supercomputer with a high-performance Aries interconnect and Lustre-attached parallel file system storage.
“It’s a tightly coupled environment for mixed cloud and dedicated workloads, which is something that doesn’t exist anywhere today,” said Barry Bolding, Cray Senior VP and Chief Strategy Officer.
According to Bolding, Cray has been looking at this market opportunity for a while, and has come to the conclusion that there is a growing pool of HPC cloud customers who are consuming compute cycles on a fairly regular basis. At some point, they figure, buying all this computation on-demand becomes cost-prohibitive, especially compared to purchasing a dedicated machine that they can get unfettered access to. But many of these same customers have neither the facilities nor the expertise to bring a large supercomputer in-house. Not only that, but they have also gotten used to the convenience of storing their data in the cloud and taking advantage of the services that come along with it.
As Cray reaches further into the commercial space, it seems to be encountering these kinds of customers with greater frequency. But Bolding says even those in the public sector, such as weather/climate sites, universities, and other government organizations, are interested in the idea of having someone else manage their supercomputers for them.
“The customers who we’ve been talking with to fill the pipeline are a mix of enterprise cloud customers looking to take the next step, and customers who have been doing HPC on their own for a while and are looking to get out of the datacenter business,” he said.
In particular, more of Cray’s customers are running supercomputing workloads that have become tightly coupled to cloud services, in part because so much of their data is being stored there. “But you don’t want to be drinking through a straw all the time,” Bolding noted. “You want to put your Cray close to the cloud data.”
From the customer’s point of view, the machine will be accessed over the company’s virtual private network, which will include hooks into Azure storage and other resources, if necessary. The idea here is to provide a relatively seamless experience for the end user, while retaining the performance and scalability of the underlying supercomputing hardware.
“I think customers world-wide are trying to figure this problem out,” said Bolding.