By: Merle Giles, National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign
Two years ago I accepted a challenge from Anwar Osseyran of SURFsara in Amsterdam to collaborate on a book about industrial HPC: not industrial in the classical sense of a supplier, but from the perspective of HPC usage by industry. Very few centers in the world serve industry in this way, and as a consequence there is little understanding of the depth of science and engineering accomplished in the private sector. As someone with business experience, I know that business drivers pull technology, but our HPC communities have little evidence of this. So we now have a newly published book that might well have been titled “Why We Care About Industrial HPC”. Readers will learn that industrial science and engineering is rather special, and at times rather extreme. Yet much of our image of HPC for industry is limited to how well commercial engineering codes scale, which is not very well. But that is not the only story.
A 1982 census of supercomputers revealed early adoption of high-performance computing (HPC) by industry, with 12 of 41 U.S. Class VI supercomputers owned by private companies. Early usage was seen in England, Germany, France, and Japan for military purposes as well as for physics and engineering, petroleum, weather, and energy. Ford Motor Company provided testimony to a U.S. Congressional committee in 1985, and adoption widened after the integrated-circuit revolution fostered more affordable clusters in the 1990s. Today, HPC usage by large companies is widespread in manufacturing, aerospace, oil & gas, life sciences, and information technology.
The classical usage for HPC is advanced 3D modeling and simulation, which is understandable when building things that need to fit together in assemblies and systems. Less well understood is how modeling and simulation benefits researchers in bioengineering, or how bioinformatics uses HPC for data analytics and insight.
NCSA’s Private Sector Program (PSP) currently serves a couple dozen companies and organizations that wish to improve product workflows. Over 30 years, PSP has partnered with more than one-third of the U.S. FORTUNE 50 companies and nearly 60 percent of the U.S. FORTUNE 100 manufacturers. Deep involvement with oil & gas companies leverages our talent in physics-based simulations. Large companies tend to use both commercial and proprietary applications on supercomputers, while smaller companies tend to stick with commercial vendor applications.
But NCSA is not the only place this is done, nor is the USA. As a member of an informal community called the International Industrial Supercomputing Workshop, I have witnessed how multiple countries serve industry directly through HPC, including the UK, Germany, The Netherlands, Finland, France, Italy, South Korea, Spain, Sweden, and Japan. Forty contributors from 11 countries in this community helped create a book about industrial use of HPC.
In the book, we tell a story about how industry demand for supercomputing is rooted in science and engineering in much the same way as it is in academia. The differences lie in the areas of discovery and knowledge: academic scientists seek scientific insight, whereas industrial scientists and engineers seek insight that manifests in products and services, typically on a shorter time-to-discovery scale. Big science applications can use large fractions of a supercomputer, whereas engineering typically uses smaller fractions. Academic research allocations on supercomputers may lead to publishable discoveries with no guarantee of subsequent allocations, whereas industrial use tends to run iterations of simulations in search of an optimal solution. The former can mostly be described as capability computing, in which the researcher makes optimal use of the machine’s capabilities to solve one large, complex problem; the latter is often described as capacity computing.
This book is not intended to be a comprehensive review of the global state of supercomputing. It does, however, attempt to give the reader a global sense of technological purpose. Taming these fickle beasts takes an immense amount of work. Provisioning the vast technical resources, operating the systems, and programming them requires a concerted effort by large teams of people. Because they are not mass-market machines, procurement costs are high and deployment is risky. Once harnessed, however, the technical aspects of these machines become commoditized and costs decrease.
Some scientific challenges must reach a certain threshold of scale to encompass the entire problem; attempts at a lesser scale are irrelevant. Large-scale supercomputers, therefore, are simply the only way to learn certain truths. Three diverse examples are star formation, stress testing macro economies, and determining how water freezes at the molecular level. The freezing-water case study is in the book, along with descriptions of its industrial impact.
It is our hope that this book will aid in the understanding that collaboration is especially valuable and that national productivity and wealth are inextricably linked to industrial productivity. Recognizing the benefits of shared investments in supercomputing and the codependencies between public and private sectors, as well as among nations, places a burden and an opportunity on the shoulders of the supercomputing community.
This blog contains excerpts from the book. Co-editors are Anwar Osseyran, University of Amsterdam and SURFsara B.V., and Merle Giles, National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign.
Industrial Applications of High-Performance Computing: Best Global Practices is available for sale at CRC Press (www.crcpress.com, Catalogue no. K20795). Save 20% when you order online and enter Promo Code AVP17. The book may also be ordered on Amazon.com (ISBN: 978-1-4665-9680-1).