As China, the US and Japan near the finish line in the exascale race, the DOE and NSA are sounding the alarm that the United States is at grave risk of losing its dominant position in high performance computing. According to the two agencies’ assessment, “absent aggressive action by the US – the US will lose leadership and not control its own future in HPC.”
The report was based on a September 2016 meeting of 60 representatives from government and industry, spanning the DOE (both the Office of Science and the National Nuclear Security Administration, or NNSA), the National Security Agency (NSA), the National Science Foundation (NSF), the Intelligence Advanced Research Projects Activity (IARPA), industrial users, HPC vendors, and academia. The report’s general recommendation is to substantially increase US government investment in HPC, encompassing not just exascale computers but post-Moore’s Law technologies as well. Absent those investments, the US risks losing HPC leadership in “less than a decade.”
A large part of the assessment is centered on China’s 2016 deployment of its 93-petaflop TaihuLight supercomputer, currently the most powerful system in the world and, on a Linpack basis, more powerful than the ten most powerful machines based in the US. TaihuLight incorporates indigenous Chinese microprocessor and systems engineering technology, and is powered by the ShenWei 26010 (SW26010), a 260-core system on a chip that delivers around three teraflops of peak performance.
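As a back-of-the-envelope check, the chip and system peak figures are consistent with the commonly cited SW26010 specs (1.45 GHz clock, 8 double-precision flops per core per cycle, and 40,960 chips in the full TaihuLight system). These numbers come from public spec sheets, not from the DOE/NSA report itself:

```python
# Rough peak-performance arithmetic for the SW26010 and TaihuLight.
# Assumed figures (commonly reported specs, used here for illustration):
#   260 cores per chip, 1.45 GHz clock, 8 DP flops per core per cycle,
#   40,960 SW26010 chips (one per node) in TaihuLight.
cores_per_chip = 260
clock_hz = 1.45e9
flops_per_core_per_cycle = 8  # simplified; the 4 management cores per chip
                              # run at a higher rate, nudging the official
                              # peak up to ~3.06 TFLOPS per chip

chip_peak = cores_per_chip * clock_hz * flops_per_core_per_cycle
print(f"SW26010 peak: ~{chip_peak / 1e12:.2f} TFLOPS")   # ~3.02 TFLOPS

nodes = 40_960
system_peak = chip_peak * nodes
print(f"TaihuLight peak: ~{system_peak / 1e15:.0f} PFLOPS")  # ~124 PFLOPS
```

The simplified estimate lands within a few percent of the officially reported ~3.06 TFLOPS per chip and ~125 PF system peak, of which the 93 PF figure above is the measured Linpack result.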
Although TaihuLight is not a particularly well-balanced machine from the standpoint of the amount of memory it has relative to its computational power, it is not considered a stunt machine. The system is being used for real research work and is able to run applications at scale. In 2016, three of the six Gordon Bell finalists were applications that ran on this machine.
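The balance concern can be made concrete with a bytes-per-flop comparison. Assuming TaihuLight’s commonly cited ~1.31 PB of system memory against its ~125 PF peak, and ORNL’s Titan (~710 TB of combined CPU and GPU memory against ~27 PF peak) as a US reference point, the gap is roughly a factor of 2.5. These specs are public figures used for illustration, not numbers from the report:

```python
# Rough memory-balance (bytes per peak flop) comparison.
# All figures are commonly cited public specs, used for illustration only.
taihulight_mem_bytes = 1.31e15    # ~1.31 PB total system memory
taihulight_peak_flops = 125.4e15  # ~125.4 PF peak

titan_mem_bytes = 0.71e15         # ORNL Titan: ~710 TB (CPU + GPU memory)
titan_peak_flops = 27.1e15        # ~27.1 PF peak

taihulight_ratio = taihulight_mem_bytes / taihulight_peak_flops
titan_ratio = titan_mem_bytes / titan_peak_flops
print(f"TaihuLight: {taihulight_ratio:.4f} bytes/flop")
print(f"Titan:      {titan_ratio:.4f} bytes/flop")
```

By this crude metric, TaihuLight carries noticeably less memory per unit of compute than its US contemporaries, which is the limitation the “not well-balanced” criticism points at.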
TaihuLight, though, is just one reflection of a much more wide-ranging strategy to expand China’s HPC capability. Part of the country’s 13th five-year plan is to establish a domestic HPC capability that can meet all its internal needs. Toward that end, China currently has three competing exascale efforts in motion at its national labs. And one of the main strengths of those efforts is that they can pursue new technologies without regard to supporting a large and unwieldy ecosystem of existing software and hardware, inasmuch as China does not have such an ecosystem.
From the report:
“Using a lightweight kernel and a lightweight core and letting go of compatibility with legacy codes is in keeping with the general approach of China’s HPC program. Extrapolating further, it could end up being a structurally lower-cost option than that which is provided by U.S. industry. This makes for a worrisome asymmetry and suggests that if the disruption plays out, U.S. providers would have a severely diminished role.”
The concern here is not that the US merely loses the exascale race – a relatively inconsequential setback – but that the Chinese capture the reins of the HPC market, which would force the US into de facto dependency on one of its major economic competitors. That would put it in a position similar to that of most European and Asian countries, which now rely on US-based companies for HPC technology and products. The report’s authors believe that the HPC market, as it’s constructed today, is ripe for such a disruption, since, as they put it:
“With a relatively small market and limited growth for HPC, the U.S. vendor community struggles to make the case for increased private investment in the current model; self-funding, an alternative approach with lower margins, is not seen as feasible when it supports such a small set of users. Thus, the U.S. vendor community is effectively locked into the current high-cost model and is not well-positioned to respond to a disruptive alternative.”
Of course, the added worry is that China is not just an economic competitor, but a political and military one as well. That means the US could find itself at a technological disadvantage in defense and national security for the first time in generations, even as its economic advantage suffers from its loss of HPC leadership.
It’s hard to imagine a more desperate scenario, but is the situation really that dire? Probably not, at least not yet.
TaihuLight is a breakthrough supercomputer for the Chinese, to be sure, but its lack of memory is likely to be a severe limitation on many HPC applications. And although the system is currently head and shoulders above anything else in the field, three DOE pre-exascale systems being constructed for the CORAL program will outperform TaihuLight when they are deployed over the next 9 to 18 months. China and the US may leapfrog each other on the way to exascale, but the current biggest system on the block is of no particular relevance if you’re concerned about long-term HPC capability.
Further, wresting control of the HPC market from US-based companies is not a simple undertaking. It’s not enough just to build a processor equal to or even better than what Intel, NVIDIA, IBM and others are producing. The same goes for interconnects, memory, storage, and any other componentry. The Chinese have no magic technology or engineering approach that would relegate US-based technology to second-tier status. (If the Chinese were first to commercialize quantum computing or neuromorphic computing, that would be another story.) HPC componentry and systems are the result of a great deal of market competition, and while supercomputers are relatively high-priced, their more mainstream cluster computing brethren are quite economical. They are also reasonably diverse, and becoming more diverse every year. To suggest all of that can be rendered obsolete in 10 years seems far-fetched.
As we recently wrote in an article about the state of Chinese supercomputing, the Chinese are not as advanced as their top systems would lead you to believe. In the US, there seems to be a distinct tendency to over-hype Chinese supercomputing achievements. Whether that is a reflection of a “grass is always greener” syndrome, the result of losing supercomputing hegemony in a rapidly democratizing industry, a tactic to draw more US government investment into HPC, or a legitimate analysis remains to be determined.
One other point to consider: The exascale race does not really serve the US (or any country) very well. Focusing on an arbitrary milestone that represents a particular level of performance seems like a poor way to advance an industry. It concentrates the attention of government funding, not to mention vendors, on a specific timeline, rather than on continuously improving these technologies in a more organic fashion. In that sense, the five-year plans the Chinese use seem like a better approach than scrambling for heroic funding during the last few years of the race to the next milestone. In any case, focusing on hardware performance metrics appears to lead to this sort of narrow thinking about HPC capabilities.
One last point. The Chinese economy continues to expand faster than that of the US and, depending on who you talk to, will reach the size of the US economy sometime between 2018 and 2028. An economy of that size would be expected to field an HPC capability on par with that of the US. Furthermore, China and the US should both be able to maintain an indigenous and self-sustaining HPC capability for their own use, and it’s unreasonable to think either could prevent the other from doing so. In such a world, the US may no longer enjoy technological supremacy, but it should surely retain the wherewithal to control its own future in HPC.