News

An Interview with Raj Hazra, Technical Computing Group, Intel Corporation

By: Addison Snell, CEO, Intersect360 Research

This article is excerpted from an interview conducted at SC14 with Raj Hazra, VP Data Center Group and General Manager, Technical Computing Group at Intel. The complete video interview is available here.

Snell: I’m here at the SC14 conference in New Orleans, with Raj Hazra, who is the VP of the Data Center Group at Intel, on the heels of some major Intel announcements. We had the Cori system at NERSC, the Trinity system that’s going into LANL as part of the ACES consortium with Sandia, and the Down Under GeoSolutions system was one of the largest commercial installations with Intel Xeon Phi.

So, Raj, Intel has a major presence at the SC14 conference, as usual. Can you talk a little bit about what some of the highlights are for Intel for SC14? What are some of the things that you have in the spotlight?

Hazra: The theme of the conference, “HPC Matters”, is absolutely appropriate because HPC truly does matter. The result is you see excitement around the use of HPC in all sectors of the economy. And it is just as exciting to be creating those capabilities in partnership with the rest of the industry. In a nutshell, if data is being created, computed on, moved or stored, we are working on some of the key challenges in delivering products and technologies in order to advance the state of the art for our customers – and for them to create and extend their businesses.

Snell: One thing we’re hearing a lot about is the concept of modularity…scaling up and down, the notion of HPC building blocks. For Intel, what are those key building blocks?

Hazra: For HPC to advance, it has to be a total solution addressing the critical barriers to deploying capabilities. In fact, you have to look at the problem in all its dimensions: the compute piece, the data movement piece, the storage piece and, absolutely not to be ignored, the software piece that ties all of this together.

We believe the key building blocks are rooted in Moore’s Law and our ability to deliver building blocks in compute, networking, interconnect and storage. But just as importantly, we have to enable the software tools, the programming methodologies, and the libraries and toolkits that the community can then use to effectively get the return on their investments in all of those capabilities. So truly, it is a building block approach, but it is a very connected, systems-oriented building block approach. And if you add hardware and software, it’s a very solutions-based building block approach. So, for us, it isn’t any one part. All of this needs to work together, hardware and software. And in hardware, it’s the compute, the storage, the interconnect, and the memory.

Snell: You mentioned software. We’ve identified it in our research as being on a critical path in scalability and the adoption of HPC going forward. As we go into the multi-core / many-core era, where does code modernization factor into your strategy for HPC?

Hazra: The use of HPC, and the effective use of ever more complex technologies and capabilities, is absolutely critical both in terms of delivering on the promise to users and in keeping the economics going. Code modernization is like a bridge between eras; without it you really can’t move from one era of return on investment to the next. We believe modernization is the right term. People have used multiple terms like scalability, even parallel programming or parallelism, but it is about taking advantage of everything that both the system hardware and the system software can offer.

It isn’t just about one particular aspect of vectorization or threading. It is about literally taking a look at your code, restructuring it, and modernizing it to use every aspect of the capabilities in the system. We think modernization is the right word because it forces people to go back and look at two things. One is: what in their code could be changed? But also: which of those changes then carry forward in a sustainable way and affect the design of future systems?

In many ways, modernization affects not only the applications but the use of those applications and the capabilities they drive into next-generation systems as well. We’re heavily invested in code modernization, not simply because it’s an element of our strategy, but because it is the industry’s strategy for moving forward and using HPC more effectively in the future. We’ve got about 40 parallel computing centers, and they’re growing. We’ve made great investments in tools, making sure the tools are available to developers so they can quickly deploy technologies and improvements in the core building blocks. And we continue to work very closely with very significant customers who are thinking about how to write their applications for the future, working hand in hand with them to make sure that the bridge exists not only for them, but for us in moving the building blocks forward as well.

Snell: At SC14 we saw some announcements of the DOE CORAL initiatives, for the Collaboration of Oak Ridge, Argonne, and Livermore. We saw an Oak Ridge announcement and a Livermore announcement, but we haven’t seen an Argonne announcement yet. Should we expect to hear something to do with Intel on this going forward?

Hazra: Well, if you read the DOE announcement that you referenced, obviously, that story hasn’t reached its final chapter yet.

Snell: How about future products that you have coming out?

Hazra: This is a very exciting time in terms of looking at the future. The next year or two will probably be transformational for the industry, and for us as well, in terms of the capabilities we are planning to bring to the market.

We announced the next generation of the Xeon Phi processor, called Knights Hill, on 10 nanometer technology following Knights Landing. Now one could say, well, that’s a little ways out, isn’t it? And it is, but it comes on the heels of the Knights Landing products slated for the second half of 2015, for which we have more than 50 system providers already signed up and extremely excited. And perhaps a larger indication of the excitement in the ecosystem around this product: more than 100 petaflops of compute capacity already committed in deals for this new technology.

In addition, and in keeping with moving all the building blocks forward, we also announced that our Omni-Path interconnect technology has some very significant advances, with end user benefits such as 30% more core density, lower system costs, 2.3x better scaling in two-tier configurations, and lower latency than InfiniBand solutions can provide.

But we are not just a silicon processor company. We continue to look at other elements of the system, announcing some significant improvements in the Lustre high performance file system technology in version 2.2 of Intel Enterprise Edition for Lustre. So, we continue to keep moving. We have some wonderful products, but more importantly, we have some wonderful collaborations and work going on in the ecosystem, not just to make these products happen, but to make them work for our customers when they become available. That’s really the excitement, and you see it in the ecosystem every day as the work on our investments moves forward.