Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back to supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious the conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to re-compile kernels with newly released drivers every time a new server came to market, just to get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a powerful tool that could change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. To get started with AI, however, neural networks had to be created, models trained on large data sets, and microprocessors were needed that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.


News Feed

Scientific AI Enters a More Mature Phase: Three Projects Explain Why

Three research stories show that AI is entering a more mature phase. Scientists are using AI to better understand complex research data and extend foundation models deeper into scientific discovery. A common theme across these projects is that AI is moving beyond experimentation toward practical integration into research workflows. Researchers are using AI to handle […]

The post Scientific AI Enters a More Mature Phase: Three Projects Explain Why appeared first on HPCwire.

SambaNova Pits Its Engineering Against Nvidia For Agentic AI

VAST Data Introduces Polaris to Orchestrate AI Data Infrastructure Across Hybrid Multicloud Environments

Industry’s first global control plane purpose-built for AI data infrastructure spanning hyperscale cloud and datacenter deployments SALT LAKE CITY, Feb. 25, 2026 — Today at VAST Forward 2026, VAST Data announced Polaris, a global control plane designed to provision, operate and orchestrate distributed AI infrastructure across public cloud, neocloud and on-premises datacenter environments. Polaris transforms VAST […]

The post VAST Data Introduces Polaris to Orchestrate AI Data Infrastructure Across Hybrid Multicloud Environments appeared first on HPCwire.

Some More Game Theory, This Time On The AMD-Meta Platforms Deal

AMD and Meta Expand Partnership with 6 GW of AMD GPUs for AI Infrastructure

AMD’s strategic struggle to carve out a growing piece of the GPU pie from market dominator NVIDIA took a positive turn for the challenger today with the announcement that AMD and Meta have agreed to a 6-gigawatt deal for AMD Instinct GPUs, in an agreement estimated at $100 billion. The companies said […]

The post AMD and Meta Expand Partnership with 6 GW of AMD GPUs for AI Infrastructure appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

Quantum: IonQ Selected to Support Missile Defense Agency SHIELD IDIQ Contract

COLLEGE PARK, Md. — February 23, 2026 — Quantum company IonQ (NYSE: IONQ) announced it was awarded a contract under the Missile Defense Agency Scalable Homeland Innovative Enterprise Layered Defense (SHIELD) indefinite-delivery/indefinite-quantity (IDIQ) contract with a ceiling of $151 billion. This contract encompasses a range of work areas that allows for the rapid delivery of […]

The post Quantum: IonQ Selected to Support Missile Defense Agency SHIELD IDIQ Contract appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

TOP500 News




The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany, at No. 4, submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark. It is the fourth Exascale system on the TOP500 and the first outside the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at Lawrence Livermore National Laboratory, California, USA remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. LLNL also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes the system No. 1 on that ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
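The Rmax and energy-efficiency figures above imply a power draw for the HPL run, since efficiency is simply sustained performance divided by power. A minimal back-of-envelope sketch, using only the numbers stated in these highlights (not official facility power data):

```python
# Back-of-envelope check of the El Capitan figures above:
# power implied by the HPL run = Rmax / energy efficiency.

rmax_gflops = 1.809e9                # 1.809 Exaflop/s expressed in Gigaflop/s
efficiency_gflops_per_watt = 60.9    # Gigaflops per watt, as listed

power_watts = rmax_gflops / efficiency_gflops_per_watt
power_megawatts = power_watts / 1e6

print(f"Implied HPL power draw: {power_megawatts:.1f} MW")  # → about 29.7 MW
```

This is consistent with El Capitan being a roughly 30 MW-class machine during the benchmark run.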

read more »

List Statistics