HPC Disruptors


In my role as an independent advisor to industry users of high performance computing, I recently helped to coordinate a private gathering of leading industry users of HPC – no HPC centers or vendors, just the users of HPC in industry, meeting as peers. One key topic we discussed was the things that might significantly change how industry deploys and uses high performance computing – what I refer to as “HPC disruptors.”

What is an HPC disruptor?

Specifically, we looked at things that (a) were realistic, though not necessarily likely, within one to five years; (b) were changes that business-as-usual would not be sufficient to cope with; and (c) would affect industry users of HPC.

Of course, the immediate reaction of the HPC community to a question about changes to the HPC world would go something like this: GPUs! Cloud! AI!

A more considered reaction might follow typical HPC thinking. First, consider the processors (the HPC community loves tracking their roadmaps and benchmarking them). Second, consider the system-level architecture. Third, we know it probably needs some I/O capability and storage capacity. Fourth, vaguely recall that the hardware isn’t much use without software. Fifth, apparently, we need people too. Sixth (and it’s rare to get this far), what about the overall HPC market/community that we exist in?

In particular, we wanted to consider non-hardware disruptors – fourth, fifth and sixth on my list.

New programming methods

A popular and perennial topic when considering changes to HPC is to postulate that the programming languages or models will (must) change. The most established languages for scientific/engineering software are Fortran and C/C++. Add MPI and/or OpenMP to these and you have the bulk of HPC codes in use today, and arguably the bulk of current development too.

There is a surplus – and I do mean a surplus – of newer programming languages that have been proposed over the years: Java, F#, Python, UPC, Coarray Fortran, Chapel, and Julia, to name just a few.

These new languages bring promises of portability, maintainability, performance portability, scalability, and so on. However, when considering switching to them for practical cases, it’s common to conclude that most of these new languages are biased towards coding new use cases from scratch and aren’t designed to integrate with the large amounts of existing code and software structures that most organizations already have. Mixed-language environments are possible, but they weaken the benefits mentioned above.
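To make the mixed-language point concrete, here is a minimal sketch of the typical pattern: a Python driver calling into compiled C code. This is illustrative only – it assumes a Unix-like system where the C math library can be located, and it stands in for the kind of validated Fortran/C core that organizations actually wrap.

```python
# Sketch: a Python "driver" calling compiled C code via ctypes -- the
# kind of mixed-language glue many HPC shops use to keep a validated
# Fortran/C core while scripting around it in a newer language.
# Assumes a Unix-like system where the C math library can be found.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature explicitly -- getting this wrong silently
# is one of the maintenance costs of mixed-language environments.
libm.cbrt.restype = ctypes.c_double
libm.cbrt.argtypes = [ctypes.c_double]

print(libm.cbrt(27.0))  # calls the compiled C routine from Python
```

Every boundary like this adds declarations to maintain and blurs the portability and maintainability promises that motivated the new language in the first place.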

It is thus easy to predict, as I have done before, that the dominant languages for HPC will remain Fortran, C/C++, OpenMP, and MPI for the foreseeable future. The reasons are many, including the huge (validated!) legacy code base, the attraction of the path of least resistance, the confidence/risks associated with known versus new technologies, the availability of skills, and the availability of ecosystem tools.

I’ll add one caveat. Python seems to be gathering sufficient momentum in some sectors and demographics to suggest it will establish itself as a key HPC language. But it won’t replace Fortran or C/C++ over the next five years.

So, if languages stay the same, and the hardware diversity and pace of change continue, then how do programmers cope? Some would suggest that riding to the rescue are code generators, domain specific languages (DSLs), and similar tools.

These tools try to address the users’ desire for a fast route from science to performant code, whilst battling the diversity and complexity of the hardware technology, and while also acknowledging that application developers can’t simultaneously be scientists, HPC technologists, and software engineers.

The idea is that developers would be able to use a language that is able to reflect their scientific/mathematical description of a problem, then a tool automatically turns this into code that performs well on a range of architectures.
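As a toy illustration of that idea – with every name hypothetical, and Python standing in for what a real system would do with tuned C, CUDA, or Fortran per target architecture – a code generator can turn a mathematical description of a stencil into an executable kernel:

```python
# Toy sketch of the DSL/code-generation idea: the "science" is a 1D
# stencil described as offset -> coefficient, and a generator emits an
# unrolled kernel from that description. All names are hypothetical;
# a real DSL would target multiple architectures, not plain Python.
def generate_stencil_kernel(coeffs):
    terms = " + ".join(
        f"{c!r} * u[i{o:+d}]" if o else f"{c!r} * u[i]"
        for o, c in sorted(coeffs.items())
    )
    src = (
        "def kernel(u):\n"
        f"    return [{terms} for i in range(1, len(u) - 1)]\n"
    )
    namespace = {}
    exec(src, namespace)  # "compile" the generated source
    return namespace["kernel"]

# Second-difference stencil: u[i-1] - 2*u[i] + u[i+1]
laplace = generate_stencil_kernel({-1: 1.0, 0: -2.0, 1: 1.0})
print(laplace([0.0, 1.0, 4.0, 9.0, 16.0]))  # → [2.0, 2.0, 2.0]
```

The scientist writes only the coefficient description; the unrolled loop, and in a real system the architecture-specific tuning, comes from the generator.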

How is this disruptive? It enables someone with this technology to iterate algorithms or science models much faster or cheaper, potentially leading to a substantial competitive edge. What is the catch? See my previous comments about new languages :-)

Cybersecurity

There has been an historic assumption that HPC is relatively safe from cyber-attack. The rationale is that HPC environments have known users, operate inside firewalls, and aren’t as attractive as corporate systems. But, there is plenty of evidence that HPC systems are targets. This is partly due to the success of HPC; the more we broadcast how valuable it is to our competitive advantage, the more of a target it becomes.

There is a balance to be struck between usability (ability to innovate and deliver value) and security (risk). Of course, industry already has security baked into their IT thinking, so what’s the disruptor? It’s understanding how HPC is different from other information technology and getting that usability/security balance better tuned than your competitors, or at least before you become a victim.

Market Dynamics – What if?

In my private gathering, we also discussed various other disruptors, but where we had the liveliest debate was on the “what ifs” of the market.

What if China joins or replaces the USA as a leading supplier of HPC hardware, software, or people?

What if Google, Amazon, Microsoft, Facebook, or some other hyperscale web company decides to in-source the server divisions of HPE or Dell, as Oracle did with Sun Microsystems? Or in-source AMD or Intel? Or design and sell their own processors?

What happens when (not if) over half of the HPC user base has cloud as their primary platform? Not the users running the biggest simulations – cloud doesn’t stack up well for that – but the dominant proportion of users who run on only a single node each?

How do these things change the market for the traditional HPC user, through effects on the supply chains?

People

There has been a realization over the last couple of years that it isn’t sustainable for HPC users to simultaneously be scientists, HPC technologists, and software engineers. This has led to the successful Research Software Engineers (RSE) movement, among other things. These activities have so far focused on programmers and users. But the same could apply to HPC managers and leaders within five years. Is it feasible for one person to have sufficient knowledge across HPC hardware, software tools, service tools, operational effectiveness, business aspects, commercial acumen, and stakeholder management to be a successful driver and leader of an HPC service or center?

As a community, we are starting to address where our future HPC programmers and users come from (see the recent ISC18 STEM Students Day), but where are our next HPC leaders coming from? Will they all start out as AI-in-the-quantum-cloud people and see HPC as merely an oddity? Will we, as we have done for hardware, have to make the best use of whatever we can get from the wider market? Arguably we do that already. University degrees with an HPC focus are still a rarity. Most HPC folk are converted physicists, chemists, meteorologists, and so on.

But what does this mean in the context of running an HPC center or service? The path I am pursuing is to add business skills to HPC technical folk, for example, through my HPC Leaders Institute training in partnership with TACC. But will we have to find another path, perhaps figure out how to convert ITIL gurus or even MBAs into technically credible HPC leaders?

Andrew Jones can be contacted via twitter (@hpcnotes), via LinkedIn (https://www.linkedin.com/in/andrewjones/), or via the TOP500 editor.
