By: Michael Feldman
The Texas Advanced Computing Center in Austin has installed the world’s largest solar-powered HPC system. The 400-teraflop supercomputer, known as Hikari, is an HPE Apollo 8000 cluster that uses technology supplied by Japanese green energy specialist NTT FACILITIES. In addition to taking advantage of Austin’s abundant sunshine, the power setup also employs high voltage direct current (HVDC) to further reduce energy consumption.
Although TACC hasn’t made a habit of focusing on energy-efficient supercomputing, it has dabbled with the technology. For example, in 2010 it trialed Green Revolution Cooling’s CarnotJet system to see how much energy it could save with the company’s immersion cooling setup. With Hikari, the center is looking to optimize power usage on the other end, creating its own energy grid and streamlining delivery by skipping the AC-to-DC conversion step. The project was launched more than a year ago as part of the center’s effort to use “alternative energy sources to power some of its high-performance computers.”
The solar setup at TACC is fairly modest, with 250 kilowatts of panels shading a few dozen spaces in the TACC parking lot. When the sun is shining, the panels can power Hikari running full tilt. At night, and presumably also on cloudy days, the supercomputer switches back to the external utility grid.
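Taking the article’s figures at face value, the 250-kilowatt array puts an upper bound on Hikari’s draw at full tilt, which implies a lower bound on its energy efficiency. A quick back-of-the-envelope check (the numbers are simply those stated above, not measured values):

```python
# Back-of-the-envelope: efficiency implied by the stated figures.
# 250 kW of solar capacity can run the machine flat out, so actual
# draw is at most 250 kW and this is a lower bound on efficiency.
peak_flops = 400e12           # 400 teraflops, as stated
solar_capacity_watts = 250e3  # 250 kW of panels

gflops_per_watt = peak_flops / solar_capacity_watts / 1e9
print(f"Implied efficiency: at least {gflops_per_watt:.1f} gigaflops per watt")
```

That works out to at least 1.6 gigaflops per watt, a respectable figure for a Haswell-era cluster.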
Although the solar energy aspect of the system is the feature most likely to capture the public’s attention, the use of the 380-volt HVDC technology may have even greater importance. According to the NTT FACILITIES engineers, the use of direct current can save 15 percent of the energy consumed compared to conventional AC-based power delivery. Nearly everything in the datacenter, from the lights to the servers, runs on DC power, and using direct current throughout avoids conversion losses.
In this case, DC is especially useful since the local solar panels generate direct current, so it’s advantageous to avoid converting to AC going into the datacenter and then back to DC going into the machine. In fact, due to the complexities of datacenter power delivery, up to four conversions would otherwise be required. End-to-end DC makes sense not only for solar panels, but also for any local power source – wind turbines, fuel cells, on-site generators, and so on.
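To see why stacking conversions is costly, consider how per-stage losses compound. The sketch below assumes a ~96 percent efficiency per conversion stage (an illustrative figure, not one from NTT FACILITIES); four such stages happen to compound to roughly the 15 percent loss cited above:

```python
# Illustrative arithmetic: losses compound across a chain of power
# conversions. The 96% per-stage figure is an assumption for the
# sake of the example.

def end_to_end_efficiency(stages: int, per_stage: float = 0.96) -> float:
    """Overall efficiency after a chain of conversion stages."""
    return per_stage ** stages

# Up to four conversions in a conventional AC delivery chain:
loss = 1 - end_to_end_efficiency(4)
print(f"Energy lost across four ~96%-efficient conversions: {loss:.1%}")
```

With those assumptions, the chain wastes about 15 percent of the input energy, which is recovered by keeping the path DC from panel to server.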
As for the Hikari cluster itself, it’s fairly conventional. It comprises 432 HPE Apollo 8000 servers equipped with Intel “Haswell” Xeon CPUs and Mellanox EDR InfiniBand adapters. (Interesting tidbit: this is TACC’s first deployment of EDR.) The machine uses Apollo’s warm-water cooling system, which reduces the number of fans required and eliminates the need for chilled water, further cutting energy consumption.
Besides being a technology demonstration project, Hikari will be used for TACC’s computing needs associated with HIPAA/FISMA-compliant data, as well as jobs that come in via the XSEDE Science Gateways. It’s not set to go into production until later in 2017.