The demand for space-based high performance computing might be small right now, but for the time being, Hewlett Packard Enterprise (HPE) has cornered the market.
On November 1, the company announced it was making its Spaceborne Computer available to perform actual work for astronauts aboard the International Space Station (ISS). The system has been operating on the ISS for a year as an experiment to determine if off-the-shelf HPC servers could withstand the rigors of outer space – things like cosmic radiation, power outages, zero gravity, and temperature fluctuations. Although the machine was based on standard HPE Apollo 40 technology, it used “software-hardening” to maintain hardware reliability and robustness.
The year-long experiment involved running HPC benchmarks like High Performance Linpack (HPL), the High Performance Conjugate Gradients (HPCG) benchmark, and the NASA-derived NAS parallel benchmarks. Results were compared against those from identical servers back on Earth to make sure the Spaceborne Computer was operating correctly. According to HPE, the experiment was a success.
“Our mission is to bring innovative technologies to fuel the next frontier, whether on Earth or in space, and make breakthrough discoveries we have never imagined before,” said Dr. Eng Lim Goh, Chief Technology Officer and Vice President, HPC and AI, HPE. “After gaining significant insights from our first successful experiment with Spaceborne Computer, we are continuing to test its potential by opening up above-the-cloud HPC capabilities to ISS researchers, empowering them to take space exploration to a new level.”
That “new level” involves putting the system into production, at least in a limited sense. The intention is to enable the ISS astronauts to run on-board “data analyses” having to do with space research. Presumably, this work would be based on analysis of the high-resolution image and video data being collected by various ISS experiments.
Those computations are bound to be somewhat limited though, since the Spaceborne Computer can only deliver about one teraflop – less than what a single high-end Xeon processor can provide these days. Nonetheless, the value here is to provide a proof point for doing actual HPC-type work that isn’t tied to Earth-bound datacenters.
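To put that one-teraflop figure in context, a rough back-of-the-envelope estimate of a modern high-end Xeon's peak double-precision throughput can be sketched as follows. The core count, clock speed, and per-cycle FLOP figures below are illustrative assumptions for a hypothetical AVX-512-capable part, not a specific SKU:

```python
# Rough peak double-precision throughput of a hypothetical high-end Xeon.
# All figures are illustrative assumptions, not a specific product spec.
cores = 28
clock_ghz = 2.5
flops_per_cycle = 32  # AVX-512: 2 FMA units x 8 doubles x 2 ops (mul + add)

peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
print(f"Estimated peak: {peak_tflops:.2f} TFLOPS")  # ≈ 2.24 TFLOPS
```

Even under these conservative assumptions, a single such processor comfortably exceeds the Spaceborne Computer's roughly one teraflop of delivered performance.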
This kind of capability becomes a real necessity once a spaceship heads out to other planets and thus loses the ability to communicate with terrestrial supercomputers in a timely manner. For example, sending data at the speed of light from Earth to Mars takes at least three minutes – and as much as about twenty-two minutes when the planets are at their farthest – which for time-critical computations is orders of magnitude too long. Even for Earth-orbit scenarios, real-time analytics work would need to be performed on the spacecraft simply because communication latencies would make remote computations impractical.
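The Mars latency figure above is simple arithmetic: divide the Earth–Mars distance by the speed of light. The distance values below are approximate orbital minimum and maximum figures:

```python
# One-way light-speed signal delay from Earth to Mars.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s

# Earth-Mars distance varies with orbital positions (approximate values).
CLOSEST_KM = 54.6e6        # near closest approach
FARTHEST_KM = 401.0e6      # near maximum separation

def one_way_delay_minutes(distance_km: float) -> float:
    """Light travel time over the given distance, in minutes."""
    return distance_km / C_KM_PER_S / 60.0

print(f"Closest approach: {one_way_delay_minutes(CLOSEST_KM):.1f} min")   # ~3.0 min
print(f"Maximum distance: {one_way_delay_minutes(FARTHEST_KM):.1f} min")  # ~22.3 min
```

So even in the best case, a round trip to a terrestrial supercomputer and back costs several minutes before any computation even begins.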
If this second phase of the experiment is successful, the next logical step would be to put a more powerful system on the space station – something that could perform much more demanding computations. For the time being, HPE is focused on using off-the-shelf technology for these systems. At some point though, NASA may be forced to explore more customized solutions, given the power, weight, and durability demands.
A timeline of the Spaceborne Computer project can be found here.