The Pawsey Supercomputing Centre is expanding its Nimbus cloud service with NVIDIA V100 GPU-powered servers in order to provide additional capabilities for researchers.
The existing Nimbus infrastructure is relatively modest as HPC cloud setups go, consisting of 3,000 CPU cores, apparently based on the now-ancient AMD Opteron processors. (Why such old CPUs were used in a cloud service launched in 2017 is a mystery.) The new addition will comprise six HPE SX40 nodes, each equipped with two NVIDIA Tesla V100 accelerators.
The Pawsey press release offers up a colorful description of the added GPU capability: “These bad boys are built to accelerate Artificial Intelligence, HPC, and graphics. Powered by NVIDIA Volta architecture, they offer the performance of up to 100 CPUs in a single GPU.”
The release goes on to say that the addition of the V100 to their Nimbus cloud puts Pawsey in “the same leagues as Google, Amazon, IBM and Microsoft who have recently announced their cloud expansion using the same NVIDIA GPUs.”
Well… sort of. Those hyperscale giants almost certainly have far more than six nodes' worth of V100 GPUs deployed in their respective clouds. However, unlike the public cloud providers just mentioned, the Nimbus service is free for Australian university and government researchers.
As far as the Nimbus upgrade goes, the new V100 nodes are being installed now, and early access is being arranged through an early adopter program. To be eligible, a researcher has to nominate a project that can employ the GPUs using existing software. The early adopter program will remain open until the system is fully commissioned for general use.
The supercomputing center has been on a bit of a roll of late. Just last month, Pawsey announced that the government was sending $70 million its way to replace its two aging Cray supercomputers. The procurement process is already in motion, and the new system or systems are scheduled to be up and running in 2019. It remains to be seen if the replacement hardware will be outfitted with any GPUs.