News

Improving Supercomputing Accuracy by Sacrificing Precision

Nov. 4, 2016

By: Michael Feldman

In what seems like a paradox, a group of computer scientists has demonstrated that reducing the mathematical precision of a supercomputing computation can actually lead to more accurate solutions. The premise of the technique is to apply the energy savings reaped from lower precision calculations toward additional computation that improves the quality of the results.

In a paper written by a team of researchers from Argonne National Laboratory, Rice University, and the University of Illinois at Urbana-Champaign, the approach is summarized thusly: “We pick as our base computational budget the amount of energy consumed to solve a given problem to a given error bound using double-precision arithmetic. We then examine how that same energy budget can be used to improve the error bound, by using lower-precision arithmetic: We save energy by replacing high precision with lower precision and reinvest these savings in order to improve solution quality.”

Today, with the prospect of exascale systems, the first of which are expected to show up in 2020 and draw at least 20 MW of power, computer scientists are scrambling to find technologies that can make these future systems practical to operate. If a supercomputer’s yearly electric bill runs into tens of millions of dollars, that’s going to have enormous consequences for how HPC evolves in the next decade.

The idea of using lower precision calculations to save energy has been around for well over a decade, even before the notion of green computing became fashionable. One of the paper’s researchers, Rice University's Krishna Palem, wrote about sacrificing mathematical precision for the purpose of deriving energy savings back in 2003. This concept of “inexact computing” is based on the notion that computational accuracy and energy are fungible, and in many cases turning the knob down on accuracy can lead to significant energy savings. The trick is to identify those parts of the software where that tradeoff is worthwhile.

Palem is now the director of the Rice University Center for Computing at the Margins (RUCCAM), a new project devoted to figuring out how hardware and software should be architected for an emerging set of applications limited by computing resources. That applies not only to applications in supercomputing, but also cloud environments, mobile computing, and embedded systems. Power use has become a limiting factor on scalability at the high end as well as for machines untethered from the power grid.

In 2003, Palem’s principal focus was on hardware, suggesting the floating-point unit could be customized according to application needs, using the example of a co-design approach for climate models. But today, commodity vector hardware in both CPUs and GPUs enables the software to save power by invoking lower precision operations. As the paper states:

“Vector operations can perform twice as many single-precision operations as double-precision operations in the same time and using the same energy. Single-precision vector loads and stores can move twice as many words than double precision in the same time and energy budget. The use of shorter words also reduces cache misses. Half precision has the potential to provide a further factor of two.”
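To make that factor of two concrete, here is a minimal sketch, assuming a 256-bit AVX vector unit (compile with -mavx); it is an illustration, not code from the paper. A single 256-bit register holds eight single-precision values but only four double-precision values, so one vector instruction performs twice as much single-precision work:

    /* Illustration only: same-width vector instructions, twice the single-precision throughput. */
    #include <immintrin.h>   /* AVX intrinsics; assumes an AVX-capable CPU */
    #include <stdio.h>

    int main(void) {
        float  fa[8] = {1,2,3,4,5,6,7,8}, fb[8] = {8,7,6,5,4,3,2,1}, fc[8];
        double da[4] = {1,2,3,4},         db[4] = {4,3,2,1},         dc[4];

        /* One 256-bit instruction performs eight single-precision additions... */
        _mm256_storeu_ps(fc, _mm256_add_ps(_mm256_loadu_ps(fa), _mm256_loadu_ps(fb)));

        /* ...while the same-width instruction performs only four double-precision additions. */
        _mm256_storeu_pd(dc, _mm256_add_pd(_mm256_loadu_pd(da), _mm256_loadu_pd(db)));

        printf("floats per 256-bit op: %zu, doubles per 256-bit op: %zu\n",
               sizeof(__m256) / sizeof(float), sizeof(__m256d) / sizeof(double));
        return 0;
    }

The same two-to-one ratio applies to vector loads and stores, which is why halving the word size also halves memory traffic and eases cache pressure, as the quote above notes.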

But this idea of banking the low-precision energy savings to be spent on additional computation in order to improve the application’s accuracy is new. The mathematics of this technique, which the researchers dub “reinvestment,” is explained in some detail in the paper. It’s based on a 17th-century numerical analysis method known as Newton-Raphson, named for its creators, Isaac Newton and Joseph Raphson. Palem says the approach is analogous to “calculating answers in a relay of sprints rather than in a marathon.” It enables the software to compute successively more accurate results as it iterates through the mathematical function.
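To see the flavor of that relay, consider a minimal sketch of a mixed-precision Newton-Raphson square root; this is an illustration of the general idea, not the paper’s actual reinvestment scheme. The early iterations, which only need a rough answer, run in cheap single precision, and only the final refinement steps are done in double precision:

    /* Illustration only: mixed-precision Newton-Raphson for sqrt(a).
       The cheap single-precision "sprints" get close to the answer;
       a couple of double-precision steps then restore full accuracy. */
    #include <stdio.h>
    #include <math.h>

    double mixed_precision_sqrt(double a) {
        float xf = (float)a;                      /* crude initial guess */
        for (int i = 0; i < 6; i++)               /* single-precision iterations */
            xf = 0.5f * (xf + (float)a / xf);

        double x = (double)xf;                    /* hand off the estimate */
        for (int i = 0; i < 2; i++)               /* double-precision refinement */
            x = 0.5 * (x + a / x);
        return x;
    }

    int main(void) {
        double a = 2.0;
        double x = mixed_precision_sqrt(a);
        printf("approximation: %.17g\n", x);
        printf("error vs sqrt: %.3g\n", fabs(x - sqrt(a)));
        return 0;
    }

Because Newton-Raphson roughly doubles the number of correct digits with each step, little is lost by running the early steps at low precision, and the energy saved there is what the researchers propose to reinvest in further computation.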

For those only interested in reducing the power bill, the researchers say you can double the energy efficiency just with judicious use of single precision, without sacrificing the quality of the result. Reducing precision can also bump up performance – a nice little side-effect of shuttling fewer bits through your application.

As in Palem’s 2003 article, the researchers point to the example of weather and climate simulations. It’s an area where the demand for greater resolution seems to be constantly at odds with power consumption. For these codes, there is little appetite to reduce the fidelity of the models, but increasing it has required buying ever-larger supercomputers demanding increasing amounts of power. As a consequence, the researchers believe this is an ideal application set for their approach, and one that is particularly valuable to society.

The research team thinks that the quality of such codes could be improved by more than three orders of magnitude, using the same power consumption as before. That would have far-reaching consequences for these applications, and would effectively extract extra performance from existing hardware. A test case appears to be underway at the European Centre for Medium-Range Weather Forecasts, where RUCCAM is helping the team there improve the resolution of its forecasting model.

The research is being supported by the US Department of Energy, the Defense Advanced Research Projects Agency, and the Guggenheim Foundation.