On November 12-13, High Performance Computing developers will descend on Salt Lake City, Utah for the 2016 Intel HPC Developer Conference, held just prior to SC16. As in past years, Intel's 2016 conference agenda is packed with keynotes, learning sessions, and hands-on labs hosted by global leaders in the supercomputing field.
High-performance computing (HPC) is driving new insights and discoveries in fields such as life sciences and oil and gas. Intel Scalable System Framework (Intel SSF), Intel's HPC approach built on a single, balanced architecture for scalable clusters that can run simulation, analytics, visualization, and machine learning codes, is powering new breakthroughs in these fields.
Performing apples-to-apples, optimized performance comparisons between different machine architectures is always a challenge. Intel has observed that the CPU implementations of many machine learning and deep learning packages have not been fully optimized for modern CPU architectures. For this reason, Intel made a number of machine learning announcements following the recent launch of the Intel Xeon Phi processor (formerly code-named Knights Landing).
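To make "not fully optimized" concrete, here is a minimal sketch in C contrasting a naive matrix multiply with a call into a tuned BLAS library; the function names matmul_naive and matmul_blas are illustrative, and the example assumes a CBLAS implementation such as OpenBLAS (Intel MKL exposes the same cblas_dgemm interface). The naive triple loop leaves vector units and cache blocking on the table, which is the kind of gap these optimization efforts target.

```c
/* Illustrative sketch: naive triple loop vs. a tuned BLAS call.
 * Assumes a CBLAS-providing library (e.g., OpenBLAS); link with -lcblas. */
#include <cblas.h>

/* Naive C = A * B for row-major n x n matrices: one scalar
 * multiply-add per inner iteration, no blocking or vectorization. */
void matmul_naive(const double *A, const double *B, double *C, int n) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

/* The same computation through the library's architecture-tuned path. */
void matmul_blas(const double *A, const double *B, double *C, int n) {
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
}
```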
New HPC products and technologies. Compelling demos. Insights from top Intel HPC architects. More than 50 presentations from Intel and industry experts. Additional details about Intel Scalable System Framework.
Deep learning has reinvigorated machine learning research and inspired a gold rush of technology innovation across a wide range of markets, from Internet search and social media to real-time robotics, self-driving vehicles, drones, and more.
At ISC14, more than 18 months ago, Intel began talking about its next-generation fabric technology, Intel Omni-Path Architecture. This new fabric for High Performance Computing and low-latency scale-out systems is part of the Intel Scalable System Framework, a new design approach for scalable, flexible HPC systems.
Code modernization is one of those buzzwords that has become widely accepted despite its ambiguity. The term describes the optimization of existing and new community and commercial codes to take advantage of the highly parallel manycore systems now coming online; a minimal sketch of the idea follows.
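As a hedged illustration of what such modernization often looks like in practice, the sketch below takes a STREAM-triad-style loop (an assumed stand-in for a legacy kernel; the names triad_legacy and triad_modern are hypothetical) and annotates it with standard OpenMP directives so that iterations are spread across cores and vectorized within each core.

```c
/* Minimal code-modernization sketch: the same loop serially,
 * then annotated for thread- and vector-level parallelism.
 * Compile with OpenMP enabled, e.g. gcc -O2 -fopenmp. */
#include <stddef.h>

/* Legacy serial version: correct, but runs on one core and gives
 * the compiler no aliasing or parallelism hints. */
void triad_legacy(double *a, const double *b, const double *c,
                  double scalar, size_t n) {
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + scalar * c[i];
}

/* Modernized version: restrict rules out aliasing, and the OpenMP 4.0
 * combined construct splits iterations across threads while asking the
 * compiler to use SIMD lanes within each thread's chunk. */
void triad_modern(double *restrict a, const double *restrict b,
                  const double *restrict c, double scalar, size_t n) {
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + scalar * c[i];
}
```

The point of the sketch is that modernization frequently changes annotations and data declarations rather than the numerical kernel itself, which is why existing community codes are viable candidates.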
Watching Intel over the years as the company has continually integrated more PC and server resources into its silicon while providing consistent architecture roadmaps, it's not surprising that Intel is now taking that approach to the system level to advance high-performance computing (HPC).
An interview with Dr. David Rohr, a postdoctoral scholar at the Frankfurt Institute for Advanced Studies (FIAS), who is in charge of the GPU-based real-time track reconstruction for the ALICE experiment at CERN.
This article is excerpted from an interview conducted at SC14 with Raj Hazra, VP Data Center Group and General Manager, Technical Computing Group at Intel, by Addison Snell, CEO of Intersect360 Research.