News

Managing HPC Failures Takes Forethought

Things do go wrong. I was recently on a train journey from Liverpool to London. Normally a two-hour direct service, the trip became a five-hour excursion thanks to a flooded line, by which time the meeting had finished without me. At the time, with frequent information announcements, passable WiFi (paid), free drinks proactively distributed (water only), food available (paid), and, most importantly, electricity sockets, the five hours passed with less distress than I might have expected in hindsight. The lesson is that when things go wrong, what matters is how they are dealt with.

Centres of Excellence: Europe's Approach to Ensure Competitiveness of HPC Applications  

While there is always a lot of buzz about the latest HPC hardware architecture developments or exascale programming methods and tools, everyone agrees that in the end what counts are the results and societal impact produced by the technology. Those results and impacts come from the scientific and industrial applications running on HPC systems. The application space is diverse, ranging from astrophysics (A) to zymology (Z). So the question arises of how to effectively fund the development and optimization of HPC applications to make them suitable for current petascale and future exascale systems.

HPC4Mfg shows why supercomputing matters to industry and consumers

Back in February, the US Department of Energy’s High Performance Computing for Manufacturing program (HPC4Mfg) announced its first awards for advancing industry manufacturing projects using national lab supercomputing resources and expertise.

Translating HPC speak

By Andrew Jones; High Performance Computing (HPC) is a vocal sport. We are always lobbying for something. Or giving talks at conferences. Or writing blogs. But sometimes we need a bit of help to translate what HPC people say into what they really mean …

Designing Energy-Aware MPI Communication Library: Opportunities and Challenges

By Dhabaleswar K. (DK) Panda, Ohio State University; Power is considered the major impediment to designing next-generation exascale systems. In recent years, the TOP500 and Green500 lists have focused on both performance and power consumption. To address the power challenge, researchers and engineers are proposing solutions along multiple directions, including: 1) exploring revolutionary architectures that compute at near-threshold voltage (NTV) to minimize leakage power; 2) developing user-controlled mechanisms to manage power (power levers), such as dynamic voltage and frequency scaling (DVFS) and core-idling; 3) increasing the efficiency of cooling subsystems; 4) extending job schedulers and resource management schemes to optimize energy consumption; 5) optimizing the throughput of a system under a strict power budget; and 6) reducing the energy consumption of an application by optimizing its computation kernels and increasing data locality.

However, without exception, all of these approaches treat the communication runtime as a black box with regard to energy consumption. Several of them use DVFS to reduce the energy consumption of an application's communication phase, but such coarse-grained approaches degrade communication performance and hence increase the application's total execution time. This leads to an open challenge: can new techniques be designed to reduce the energy consumption of the communication runtime itself? If feasible, such techniques have the potential to deliver significant energy savings in conjunction with complementary state-of-the-art techniques on next-generation exascale systems.

The Message Passing Interface (MPI) is the de facto communication runtime for most current-generation HPC systems. In a recently published paper [1], selected as a finalist in the best student paper category at SC15, authors from The Ohio State University and Pacific Northwest National Laboratory asked two fundamental questions: 1) Can MPI communication runtimes be designed to be energy-aware? 2) Can energy be saved during MPI calls without a loss in performance?

To answer these questions, the authors proposed a set of designs that exploit the slack in MPI calls to save energy by applying a lower-energy lever, using DVFS and/or core-idling. The challenge is deciding when to apply a power lever so that energy savings are maximized with no impact on performance. The authors analyzed the behavior of the different internal communication protocols used by MPI and proposed designs that achieve fine-grained performance-energy trade-offs. The designs also incorporate a user-defined parameter that caps the maximum allowed performance degradation: the runtime saves as much energy as possible inside MPI calls while guaranteeing that performance never degrades by more than the user-specified value. For instance, with the Graph500 application kernel, the MPI runtime achieved 41 percent energy savings with minimal impact on performance (less than 4 percent) using 2,048 processes.

The study was carried out by modifying the MVAPICH2 MPI runtime [2]. Following this study, the MVAPICH2 team has incorporated the proposed designs into an initial production-ready energy-aware runtime, known as MVAPICH2-EA [3].
To measure the energy savings of MPI applications running on the MVAPICH2-EA stack, the MVAPICH2 team has also designed the OSU Energy Monitoring Tool (OEMT) [4]. Both MVAPICH2-EA and OEMT are publicly available. The team is also exploring designs of collective algorithms that use new transport protocols to save energy on InfiniBand clusters [5].

[1] A. Venkatesh, A. Vishnu, K. Hamidouche, N. Tallent, D. K. Panda, D. Kerbyson, and A. Hoisie, "A Case for Application-Oblivious Energy-Efficient MPI Runtime," Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC15), Nov. 2015 (Best Student Paper Finalist).
[2] http://mvapich.cse.ohio-state.edu/
[3] http://mvapich.cse.ohio-state.edu/downloads/
[4] http://mvapich.cse.ohio-state.edu/tools/oemt/
[5] H. Subramoni, A. Venkatesh, K. Hamidouche, K. Tomko, and D. K. Panda, "Impact of InfiniBand DC Transport Protocol on Energy Consumption of All-to-all Collective Algorithms," 23rd International Symposium on High Performance Interconnects (HOTI), Aug. 2015.
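
The core idea, detecting slack inside a communication call and applying a power lever until the message actually arrives, can be illustrated at the application level with a small sketch. MVAPICH2-EA implements this inside the runtime, per internal protocol, and enforces the user-specified degradation bound; the sketch below only mimics the concept around a blocking wait. The set_cpu_khz helper, the cpufreq sysfs path (which assumes the userspace governor, write permission, and a rank pinned to core 0), the polling interval, and the frequency values are illustrative assumptions, not part of MVAPICH2-EA's interface.

```c
/*
 * Illustrative sketch only: a user-level "energy-aware wait" that drops the
 * calling core to a lower frequency while an MPI request still has slack,
 * then restores full speed once the message completes.
 */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical DVFS lever: assumes the cpufreq "userspace" governor is
 * active and the process may write to the sysfs file. */
static void set_cpu_khz(int cpu, long khz)
{
    char path[128];
    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_setspeed", cpu);
    FILE *f = fopen(path, "w");
    if (f) {
        fprintf(f, "%ld\n", khz);
        fclose(f);
    }
}

/* Wait for a request; while the message is still in flight (the slack),
 * run the core at low_khz, then restore high_khz for computation. */
static int energy_aware_wait(MPI_Request *req, MPI_Status *status,
                             int cpu, long low_khz, long high_khz)
{
    int flag = 0;
    MPI_Test(req, &flag, status);
    if (flag)
        return MPI_SUCCESS;        /* no slack: nothing to save */

    set_cpu_khz(cpu, low_khz);     /* apply the power lever */
    while (!flag) {
        usleep(50);                /* coarse polling interval (assumption) */
        MPI_Test(req, &flag, status);
    }
    set_cpu_khz(cpu, high_khz);    /* restore full speed */
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int token = rank;
    MPI_Request req;
    MPI_Status st;

    if (rank == 0) {
        /* Rank 0 waits here; idle the core cheaply during the slack.
         * Assumes this rank is pinned to core 0. */
        MPI_Irecv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        energy_aware_wait(&req, &st, /*cpu=*/0,
                          /*low_khz=*/1200000, /*high_khz=*/2400000);
    } else if (rank == 1) {
        sleep(1);                  /* artificial work that creates slack */
        MPI_Isend(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &st);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and run with at least two ranks (for example, mpirun -np 2 ./energy_wait), rank 0 idles at the lower frequency during rank 1's artificial one-second delay and returns to full speed once the message lands; a production runtime would instead estimate slack per communication protocol and apply levers only when the user's degradation threshold allows.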

The impact of the U.S. supercomputing initiative will be global

By Dona Crawford; Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

HPC for Economic Growth and Development

By Cynthia R. McIntyre; This is an exciting period as emerging and frontier market economies (as defined by the financial sector) adopt advanced technologies for economic competitiveness, societal benefit, and scientific discovery. Most notably, these economies are evaluating and acquiring advanced information and communication technologies as competitive platforms for socio-economic benefit.

How One HPC Center Learned to Count

By John West; Given the impact of HPC, it is important that we do everything we can to include the best possible solutions in the technologies we build, and for that we need to ask the broadest possible sample of people for their best ideas.

Applying HPC for the benefit of society

By Sharan Kalwani; One of the main reasons I got into computing, admittedly a long time ago, was the potential I personally saw in using this unbelievably powerful new tool to address the needs of the society we live in.

Diversity in HPC Won’t Improve Until We Start Counting

By John West; I have chosen to work in HPC because the work we do makes the world a better place. As SC has illustrated effectively over the past two years, #hpcmatters.