Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back to supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious this conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to recompile kernels with newly released drivers every time a new server came to market, just so I could get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a powerful tool to change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. To get started with AI, however, neural networks had to be created, data sets used to train them, and microprocessors were needed that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.


News Feed

The Current AI Networking Wave Will Be A Tsunami Of Money By 2027

$230.70. That’s it.

If you take the $34.6 billion that Arista Networks has made in product revenue since it was founded way back in 2004 by Andy Bechtolsheim, David Cheriton, and Kenneth Duda and divide it by the 150 million cumulative ports it has shipped (with the product ramp really starting in 2010, after the company came out of stealth mode in 2009), that is what you get. This is a remarkable number given that Arista has tended to ship very expensive ports, often costing $1,000 or more without services on top of them.
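The back-of-the-envelope math here can be checked in a couple of lines; the $34.6 billion and 150 million figures are the ones quoted above, and the small gap versus the headline $230.70 is presumably rounding in the underlying numbers:

```python
# Rough revenue-per-port check using the figures quoted in the article
product_revenue = 34.6e9   # cumulative Arista product revenue, in dollars
ports_shipped = 150e6      # cumulative ports shipped

revenue_per_port = product_revenue / ports_shipped
print(f"${revenue_per_port:.2f} per port")  # ~$230.67, close to the quoted $230.70
```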

The Current AI Networking Wave Will Be A Tsunami Of Money By 2027 was written by Timothy Prickett Morgan at The Next Platform.

NSF Launches $100M National Quantum and Nanotechnology Research Infrastructure Program

Feb. 13, 2026 — The U.S. National Science Foundation is investing up to $100 million to establish a nationwide network of open-access research facilities for quantum and nanoscale technologies, innovation, and workforce training. Through the new NSF National Quantum and Nanotechnology Infrastructure (NSF NQNI) program, NSF will support up to 16 sites over five years, […]

The post NSF Launches $100M National Quantum and Nanotechnology Research Infrastructure Program appeared first on HPCwire.

Los Alamos Consolidates Quantum Research Under New Center

Last week Los Alamos National Laboratory revealed that it will be uniting its various quantum computing research groups with the creation of a new Center for Quantum Computing. The new center, located in downtown Los Alamos, is designed to coordinate research spanning algorithms, hardware evaluation, hybrid quantum-classical workflows, and national security applications, while also reinforcing […]

The post Los Alamos Consolidates Quantum Research Under New Center appeared first on HPCwire.

Argonne’s Sheng Di Wins IEEE Award for Excellence in Scalable Computing

Sheng Di, a computational scientist in the Mathematics and Computer Science division at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, has been selected for the IEEE Technical Committee on Scalable Computing Award for Excellence in Scalable Computing (Middle Career Researcher). The award, which includes a $500 prize, is in recognition of Di’s pioneering contributions to error-bounded scientific data […]

The post Argonne’s Sheng Di Wins IEEE Award for Excellence in Scalable Computing appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

Oak Ridge Partners with NuScale on AI-Guided Nuclear Fuel Management

NuScale Power Corporation (NYSE: SMR) said it will partner with Oak Ridge National Laboratory to utilize an artificial intelligence-enabled nuclear design framework for a 12-NuScale Power Module ....

The post Oak Ridge Partners with NuScale on AI-Guided Nuclear Fuel Management appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

The Memory Crunch Pinches Cisco’s Profits

It has taken many years for the AI boom to reach the general ledgers and balance sheets of the world's largest original equipment manufacturers, and one might say it has taken particularly long to reach Cisco Systems, the dominant supplier of switching and routing in the enterprise and traditional telco/service provider spaces as well as a respectable systems supplier, with over 90,000 customers using its UCS converged server-switch platforms.

The Memory Crunch Pinches Cisco’s Profits was written by Timothy Prickett Morgan at The Next Platform.

TOP500 News





The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany, ranked No. 4, submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark. It is the fourth Exascale system on the TOP500 and the first one outside of the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at the Lawrence Livermore National Laboratory, California, USA, remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. LLNL also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes the system No. 1 on that ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
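As a sanity check, the two performance figures above imply the machine's total power draw; this sketch simply divides the HPL Rmax by the reported efficiency (the TOP500 list reports the measured power directly, so treat this only as a consistency check):

```python
# Derive El Capitan's implied power draw from the figures above
rmax_flops = 1.809e18   # HPL Rmax: 1.809 Exaflop/s
efficiency = 60.9e9     # energy efficiency: 60.9 Gigaflops per watt

power_watts = rmax_flops / efficiency
print(f"{power_watts / 1e6:.1f} MW")  # ~29.7 MW
```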

read more »

List Statistics