Water scarcity has emerged as a critical issue in the U.S. and around the globe. A McKinsey-led report projects that, by 2030, global water demand will exceed supply by 40%. According to a recent report by the Congressional Research Service (CRS), more than 70% of the U.S. land area experienced drought conditions during August 2012.
By 2014, conditions had become even worse in some states: following a three-year dry period, California declared a statewide drought emergency. An NBC News report on the drought quotes California Gov. Jerry Brown as calling it "perhaps the worst drought California has ever seen since records began being kept about 100 years ago." Such evidence of extended droughts and water scarcity has made concerted approaches to tackling the global crisis and ensuring water sustainability a clear necessity.
Supercomputers are notorious for consuming a significant amount of electricity, but a less-known fact is that supercomputers are also extremely “thirsty” and consume a huge amount of water to cool down servers through cooling towers that are typically located on the roof of supercomputer facilities. While high-density servers packed in a supercomputer center can save space and/or costs, they also generate a large amount of heat which, if not properly removed, could damage the equipment and result in huge economic losses.
Water's high heat capacity makes it an ideal and energy-efficient medium for rejecting server heat into the environment through evaporation, an old yet effective cooling mechanism. According to Amazon's James Hamilton, a 15MW data center could guzzle up to 360,000 gallons of water per day. The U.S. National Security Agency's data center in Utah would require up to 1.7 million gallons of water per day, enough to satiate over 10,000 households' water needs.
Although water consumption is related to energy consumption, the two differ: because outside temperatures vary over time, cooling-tower water efficiency varies as well, so the same amount of server energy consumed at different times can evaporate different amounts of water. In addition to onsite cooling towers, supercomputers' enormous appetite for electricity also makes them accountable for offsite water consumption embedded in electricity production. In fact, electricity production accounts for the largest water withdrawal of any sector in the U.S. While not all withdrawn water is consumed or "lost" via evaporation, the national average water consumption for just one kWh of electricity still reaches 1.8 L, even excluding hydropower, which is itself a huge water consumer.
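The onsite-plus-offsite accounting above can be sketched in a few lines. The offsite factor of 1.8 L/kWh is taken from the article; the onsite cooling-tower figure below is an illustrative assumption (in practice it varies with outside temperature), as is the helper name `water_footprint_liters`.

```python
# Sketch of a facility water-footprint estimate from energy use.
# OFFSITE factor (1.8 L/kWh) is the national average cited in the article;
# the ONSITE figure is an assumed placeholder and varies with weather.

ONSITE_WUE_L_PER_KWH = 1.0    # assumed onsite cooling-tower water use (L/kWh)
OFFSITE_EWIF_L_PER_KWH = 1.8  # water embedded in electricity generation (L/kWh)

def water_footprint_liters(energy_kwh,
                           onsite_wue=ONSITE_WUE_L_PER_KWH,
                           offsite_ewif=OFFSITE_EWIF_L_PER_KWH):
    """Total (onsite + offsite) water consumed for a given energy use."""
    return energy_kwh * (onsite_wue + offsite_ewif)

# A 15 MW facility running for one day uses 15,000 kW * 24 h = 360,000 kWh.
daily_energy_kwh = 15_000 * 24
print(water_footprint_liters(daily_energy_kwh))  # ~1,008,000 L under these assumptions
```

Because both factors change with time and location, the same daily energy can translate into very different water totals, which is exactly the variation the approaches below exploit.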
Amid concerns over the tremendous amount of water required to run data centers and supercomputers, there has been increasing interest in mitigating water consumption. For example, Facebook and eBay have developed dashboards to monitor water efficiency (Water Usage Effectiveness, or WUE) at runtime, while Google and the NCAR-Wyoming Supercomputing Center (NWSC) are developing water-efficient cooling technologies, such as outside-air cooling and recycled water. These approaches, however, target only facility or infrastructure improvements, and they require high upfront capital investment and/or suitable climate conditions.
Why should supercomputers care about water consumption? There are several good reasons. Water conservation not only earns supercomputers tax credits and trims their annual utility bills, but also improves their sustainability and helps them survive the extended droughts that are increasingly frequent in water-stressed areas such as California, where many large data centers and supercomputers are located. Water conservation also helps supercomputers acquire green certifications and fulfill their social responsibilities.
Motivated by the dearth of thorough research on supercomputer water efficiency and the urgency of water conservation, a group of researchers at Florida International University has recently been working on data center and supercomputer water conservation. Unlike current water-saving approaches, which focus primarily on improved "engineering" and exhibit several limitations (such as high upfront capital investment and the need for a suitable climate), the research group devises software-based approaches that mitigate water consumption by exploiting the inherent spatio-temporal variation of water efficiency. This variation comes from nature for free: volatile temperatures result in time-varying water efficiency, while heterogeneous supercomputer systems across different locations lead to spatial variation of water efficiency.
The research group finds that the spatio-temporal variation of water efficiency is also a perfect fit for supercomputers' workload flexibility: workloads can be migrated to locations with higher water efficiency and/or deferred to water-efficient times. The effectiveness of the approach has been demonstrated via extensive experimental studies, reducing water consumption by 20% with almost no compromise in other aspects such as service latency. These promising results mark the first step toward a far-reaching change in achieving supercomputer sustainability through water conservation, without upfront capital investment or facility upgrades.
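The migrate-and-defer idea can be illustrated with a tiny greedy heuristic: for each flexible job, pick the (site, time slot) with the lowest water intensity before the job's deadline. This is only a sketch of the concept, not the FIU group's actual algorithm, and the intensity numbers and names (`intensity`, `place`) are made-up assumptions.

```python
# Illustrative water-aware placement: choose the feasible (site, hour) pair
# with the lowest water intensity (L/kWh). Values are invented for the sketch;
# cooler hours and sites evaporate less water per kWh.

intensity = {
    ("site_A", 0): 2.9, ("site_A", 12): 3.6,  # hot afternoon -> more evaporation
    ("site_B", 0): 2.2, ("site_B", 12): 2.7,
}

def place(job_energy_kwh, deadline_hours):
    """Return the (site, hour) minimizing water use by the deadline, and the liters used."""
    feasible = [k for k in intensity if k[1] <= deadline_hours]
    best = min(feasible, key=lambda k: intensity[k])
    return best, job_energy_kwh * intensity[best]

slot, liters = place(100, deadline_hours=12)
print(slot, liters)  # best slot is the coolest feasible site/hour
```

A real scheduler would also weigh migration costs and latency constraints; the point here is simply that deferring or relocating the same 100 kWh job changes its water footprint.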
If you operate a supercomputer in water-stressed areas undergoing drought conditions, the software-based approach may help your supercomputer survive droughts without costing you a single cent on facility upgrades.
The original article was published by HPCwire at http://www.hpcwire.com/2014/01/26/can-supercomputers-survive-drought/.