News


US Regains TOP500 Crown with Summit Supercomputer, Sierra Grabs Number Three Spot

FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.—The TOP500 celebrates its 25th anniversary with a major shakeup at the top of the list. For the first time since November 2012, the US claims the most powerful supercomputer in the world, leading a significant turnover in which four of the five top systems were either new or substantially upgraded.

Thomas Sterling Talks Exascale, Chinese HPC, Machine Learning, and Non-von Neumann Architectures

On Wednesday evening at the ISC High Performance conference, HPC luminary Dr. Thomas Sterling will deliver his customary keynote address on the state of high performance computing. To get something of a preview of that talk, we caught up with Sterling and asked him about some of the more pressing topics in the space.

Sandia to Install First Petascale Supercomputer Powered by ARM Processors

Sandia National Laboratories will soon be taking delivery of the world’s most powerful supercomputer using ARM processors. The system, known as Astra, is being built by Hewlett Packard Enterprise (HPE) and will deliver 2.3 petaflops of peak performance when it’s installed later this year.

Summit Up and Running at Oak Ridge, Claims First Exascale Application

The Department of Energy’s 200-petaflop Summit supercomputer is now in operation at Oak Ridge National Laboratory (ORNL). The new system is being touted as “the most powerful and smartest machine in the world.”

As Moore’s Law Winds Down, Chipmakers Consider the Path Forward

At this month’s ISC High Performance conference, representatives from Intel, NVIDIA, Xilinx, and NEC will speak about the challenges they face as applications like machine learning and analytics are demanding greater performance at a time when CMOS technology is approaching its physical limits.

NVIDIA Brings HPC and AI Under Single Platform with HGX-2

At Taiwan’s GPU Technology Conference this week, NVIDIA founder and CEO Jensen Huang announced the HGX-2, a 16-GPU reference design aimed at some of the most computationally demanding HPC and AI workloads. As a reflection of its tightly integrated design, Huang characterized the platform as “the world’s largest GPU.”

Intel Lays Out New Roadmap for AI Portfolio

At Intel’s inaugural AI DevCon conference this week, AI Products Group chief Naveen Rao updated the roadmap for the company’s artificial intelligence chips. The changes will impact the much-anticipated Neural Network Processor, and to a lesser degree, general-purpose products like Xeons and FPGAs.

Chip Startup Unveils Processor That Aims to Scale the Datacenter Power Wall

Tachyum, a Silicon Valley startup, has unveiled a new processor that the company says can tackle a broad range of workloads in HPC, data analytics, artificial intelligence, and web services, while using a fraction of the power of existing chips.

Did Google AI Just Pass the Turing Test?

Google has demonstrated an artificial intelligence technology that represents the most sophisticated example to date of a computer engaging in natural conversation with a human. Upon hearing the interaction, some listeners felt the software had convincingly passed the Turing test.

CERN Prepares for New Computing Challenges with Large Hadron Collider

Thanks to the discovery of the Higgs boson in 2012, CERN’s Large Hadron Collider (LHC) has probably become the most widely recognized science project on the planet. Less well-known is the computing infrastructure that supports this effort and the demands that are placed upon it.