Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back to supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious this conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to re-compile kernels with newly released drivers every time a new server came to market, just so I could get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a technology that could be harnessed as a powerful tool to change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. However, to get started with AI, neural networks needed to be created, models trained on large data sets, and processors found that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.
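To illustrate why those workloads map so naturally onto accelerators, here is a minimal sketch in plain NumPy (layer sizes chosen arbitrarily for illustration): a single neural-network layer boils down to one large matrix multiply, and production frameworks hand exactly these GEMMs off to GPU or accelerator math libraries rather than running them on the CPU.

```python
# Illustrative sketch only: the dense matrix multiplication at the core of
# neural-network training and inference. Shapes are arbitrary; real frameworks
# dispatch these GEMMs to accelerator libraries instead of NumPy on the CPU.
import numpy as np

batch, d_in, d_hidden = 256, 1024, 4096
x = np.random.rand(batch, d_in).astype(np.float32)       # input activations
w1 = np.random.rand(d_in, d_hidden).astype(np.float32)   # layer weights

# One forward-pass layer is essentially a single large matrix multiply:
h = x @ w1      # (256 x 1024) @ (1024 x 4096) -> (256 x 4096)

# That single product costs roughly 2 * batch * d_in * d_hidden flops,
# which is why hardware built around fast matrix math pays off so quickly.
flops = 2 * batch * d_in * d_hidden
print(f"~{flops / 1e9:.1f} GFLOPs for one layer of one batch")
```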


News Feed

2026 Semiconductor Predictions: Here Come the AI Accelerators

We’re barely three years into the GenAI revolution, but considering how fast things are moving, it feels like it could be 20 years. One of the areas of AI experiencing heightened innovation is semiconductors. Here’s what you can expect in 2026 in the hot world of chips and AI accelerators. Our first prediction comes from Ankur […]

The post 2026 Semiconductor Predictions: Here Come the AI Accelerators appeared first on HPCwire.

NVIDIA Releases New Physical AI Models as Global Partners Unveil Next-Gen Robots

LAS VEGAS, Jan. 6, 2026 — NVIDIA has announced new open models, frameworks and AI infrastructure for physical AI, and unveiled robots for every industry from global partners. The new NVIDIA technologies speed workflows across the entire robot development lifecycle to accelerate the next wave of robotics, including building generalist-specialist robots that can quickly learn many […]

The post NVIDIA Releases New Physical AI Models as Global Partners Unveil Next-Gen Robots appeared first on HPCwire.

LLNL, Stanford Researchers Report 3D Nanofabrication Approach for TPL Wafer-Scale Manufacturing

Researchers at Lawrence Livermore National Laboratory and Stanford University announced in December they have demonstrated a new 3D nanofabrication approach that they say transforms two-photon lithography (TPL) from a slow, lab-scale technique into a wafer-scale manufacturing tool without sacrificing submicron precision. Published in Nature, the team’s TPL platform uses large arrays of metalenses — engineered, ultrathin […]

The post LLNL, Stanford Researchers Report 3D Nanofabrication Approach for TPL Wafer-Scale Manufacturing appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

NVIDIA Releases Details on Next-Gen Vera Rubin AI Platform — 5X the Performance of Blackwell

Those who anticipated NVIDIA CEO Jensen Huang would delay delivering an update on its next big AI chip — the Vera Rubin processor first discussed last March at the company’s GTC conference in San Jose — until the upcoming GTC conference in two months were surprised last night when Huang released details about the chip […]

The post NVIDIA Releases Details on Next-Gen Vera Rubin AI Platform — 5X the Performance of Blackwell appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

AMD Contemplates And Engineers Yottascale AI Compute

Under Lisa Su’s more than eleven years as AMD’s chief executive officer, the company has returned from Opteron exile to be a formidable CPU foe to Intel in the datacenter, due in large part to the innovation around its Zen microarchitecture and Epyc server chips driven by Su, chief technology officer Mark Papermaster, and countless others.

AMD Contemplates And Engineers Yottascale AI Compute was written by Jeffrey Burt at The Next Platform.

Nvidia’s Vera-Rubin Platform Obsoletes Current AI Iron Six Months Ahead Of Launch

Having an annual cadence for the improvement of AI systems is a great thing if you happen to be buying the newest iron at exactly the right time.

Nvidia’s Vera-Rubin Platform Obsoletes Current AI Iron Six Months Ahead Of Launch was written by Timothy Prickett Morgan at The Next Platform.

TOP500 News





The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany, at No. 4, submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark. It is the fourth Exascale system on the TOP500 and the first outside the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at the Lawrence Livermore National Laboratory, California, USA, remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. El Capitan also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes it No. 1 on that ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
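As a rough sanity check on that efficiency figure, the quick calculation below derives the implied system power from the Rmax and Gigaflops/watt values quoted above (the power number is inferred here, not taken from the list entry itself).

```python
# Back-of-the-envelope check: implied power draw from the quoted figures.
rmax_flops = 1.809e18    # El Capitan HPL result: 1.809 Exaflop/s
gflops_per_watt = 60.9   # quoted energy efficiency

power_watts = rmax_flops / (gflops_per_watt * 1e9)
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")   # ~29.7 MW
```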

read more »

List Statistics