Although the Green500 showed steady incremental progress, nothing indicated a major step toward newer technologies.
The system snagging the No. 1 spot on the Green500 was MN-3 from Preferred Networks in Japan. Knocked from the top of the previous list by an NVIDIA DGX SuperPOD in the US, MN-3 is back to reclaim its crown. The system relies on the MN-Core chip, an accelerator optimized for matrix arithmetic, paired with a Xeon Platinum 8260M processor. MN-3 achieved a power efficiency of 29.70 gigaflops/watt and ranks No. 337 on the TOP500.
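The Green500 metric above is simply sustained HPL performance divided by average power draw during the run. A minimal sketch of that arithmetic, using hypothetical figures chosen to reproduce MN-3's 29.70 gigaflops/watt (not its published Rmax or power numbers):

```python
def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Green500-style efficiency: HPL Rmax (TFlop/s) per watt of power (kW)."""
    # Convert TFlop/s to GFlop/s and kW to W, then divide.
    return (rmax_tflops * 1e3) / (power_kw * 1e3)

# Hypothetical illustration values, not MN-3's reported measurements:
efficiency = gflops_per_watt(rmax_tflops=1485.0, power_kw=50.0)
print(f"{efficiency:.2f} GFlop/s per watt")  # 29.70 GFlop/s per watt
```

Note that the Green500 ranks by this ratio rather than raw performance, which is why a comparatively small system like MN-3 (No. 337 on the TOP500) can lead the efficiency list.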
Setting aside the relatively recent rise of electronic signatures, personalized stamps have been a popular form of identification for formal documents in East Asia. These identifiers – easily forged, but culturally ubiquitous – are the subject of research by Raja Adal, an associate professor of history at the University of Pittsburgh. But, it turns out, […]
The post Pittsburgh Supercomputer Powers Machine Learning Analysis of Rare East Asian Stamps appeared first on HPCwire.
The next meeting of the Advanced Scientific Computing Advisory Committee (ASCAC), the federal advisory committee for the US Department of Energy Office of Science’s program on Advanced Scientific Computing Research (ASCR), will take place on Wednesday, September 29th, and Thursday, September 30th, 2021, from 11:00 a.m. to 3:00 p.m. EDT via Zoom. For information on how to join the meeting, please visit […]
The post DOE’s Advanced Scientific Computing Advisory Committee to Meet Sept. 29-30 appeared first on insideHPC.
Making sense of ML performance and benchmark data is an ongoing challenge. In light of last week’s release of the most recent MLPerf (v1.1) inference results, now is perhaps a good time to review how valuable (or not) such ML benchmarks are and the challenges they face. Two researchers from Purdue University recently tackled this […]
The post Purdue Researchers Peer into the ‘Fog of the Machine Learning Accelerator War’ appeared first on HPCwire.
Sponsored When it comes to compute engines and network interconnects for supercomputers, there are lots of different choices available, but ultimately the nature of the applications — and how they evolve over time — will drive the technology choices that organizations make. …
JAMSTEC Goes Hybrid On Many Vectors With Earth Simulator 4 Supercomputer was written by Timothy Prickett Morgan at The Next Platform.
SANTA CLARA, CA – September 27, 2021 – Astera Labs, maker of connectivity solutions for intelligent systems, today announced raising $50M as part of an oversubscribed Series-C funding round led by Fidelity Management and Research. Fidelity was joined in this funding round by Atreides Management and Valor Equity Partners, with continued participation from existing investors […]
The post Astera Labs Secures $50M in Series-C Funding appeared first on insideHPC.
If they are doing their jobs right, the high performance computing centers around the world in academic and government institutions are supposed to be on the cutting edge of any new technology that boosts the performance of simulation, modeling, analytics, and artificial intelligence. …
NSF Puts $10 Million Into Composable Supercomputer was written by Timothy Prickett Morgan at The Next Platform.
Building on the successful implementation of the Partnership for Advanced Computing in Europe (PRACE), the European Commission (EC) has increased its efforts to develop a world-class supercomputing ecosystem in Europe. The EC, EuroHPC Joint Undertaking (JU) and EU Member States have made significant investments in European petascale and pre-exascale infrastructure, have put exascale supercomputers on the roadmap, and are actively exploring new post-exascale architectures. The return on investment will be directly linked to the productivity of end-users in academia, in industry, and in the public sector. Key to this productivity is an ecosystem of user-oriented software: scientific applications and workflows …
The only new entry in the Top10 is the Perlmutter system at NERSC at the DOE Lawrence Berkeley National Laboratory. It is based on the HPE Cray “Shasta” platform and is a heterogeneous system with both GPU-accelerated and CPU-only nodes. Perlmutter achieved 64.6 Pflop/s, which put it at No. 5 on the new list.
Supercomputer Fugaku, a system based on Fujitsu’s custom Arm A64FX processor, remains No. 1. It is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, the former home of the K computer. It was co-developed in close partnership by RIKEN and Fujitsu and uses Fujitsu’s Tofu D interconnect to transfer data between nodes. Its HPL benchmark score of 442 Pflop/s exceeds that of the No. 2 system, Summit, by a factor of three. In single or further-reduced precision, which is often used in machine learning and AI applications, its peak performance is above 1,000 Pflop/s (= 1 Exaflop/s), which is why it is often introduced as the first ‘exascale’ supercomputer. Fugaku has already demonstrated this level of performance on the new HPL-AI benchmark, reaching 2 Exaflops. https://www.r-ccs.riken.jp/en/