After just three years in the field, the High Performance Conjugate Gradients (HPCG) benchmark is emerging as the first viable new metric for the high performance computing crowd in decades. The latest HPCG list, compiled last November, shows 115 supercomputer entries spread across 16 countries.
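Since HPCG's workload is built around a preconditioned conjugate gradient solve on a large sparse system, a bare-bones CG iteration gives a sense of the memory-bound operations the benchmark stresses. The sketch below is a plain, unpreconditioned CG on a small dense NumPy matrix for illustration only; it is not the benchmark's reference code.

```python
# Minimal illustration of the conjugate gradient iteration at the heart of HPCG.
# The real benchmark runs a preconditioned CG on a large sparse 3D-grid problem
# and is dominated by sparse matrix-vector products; this dense toy version just
# shows the structure of the iteration.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # matrix-vector product (the hot spot)
        alpha = rs_old / (p @ Ap)       # step length along the search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:       # converged to the requested tolerance
            break
        p = r + (rs_new / rs_old) * p   # update the search direction
        rs_old = rs_new
    return x

# Small symmetric positive-definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approx [0.0909, 0.6364]
```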
China-based tech giant Alibaba is challenging American cloud providers in Europe with an HPC service designed for users running a variety of compute-intensive and data-intensive workloads. The company also unveiled a new cloud-based quantum computing platform.
Lenovo has unveiled the ThinkSystem SD650, a densely packed, direct water-cooled server aimed at the HPC market. Its first big test will come later this year when it is deployed in SuperMUC-NG, which will be Germany's most powerful supercomputer.
During Congressional testimony last week, Intel and NVIDIA proposed that the US government institute policies that would speed development of artificial intelligence. The testimony was submitted to the House Subcommittee on Information Technology, headed by Texas Congressman Will Hurd.
Closing the books on 2017, Cray posted one of its worst financial reports in recent memory, with a net loss of $133.8 million. It marks the first time since 2009 that the company went into the red and represents the second-worst net loss in its history.
Google has announced a beta program to make its in-house Tensor Processing Units (TPUs) available to cloud customers, marking the first time a custom-built machine learning chip will be accessible to a wide array of users.
Chip startup Lightmatter has received an infusion of $11 million from investors to help bring the world’s first silicon photonics processor for AI to market. Using technology originally developed at MIT, the company is promising “orders of magnitude performance improvements over what’s feasible using existing technologies.”
According to a news report in People’s Daily Online, China is planning to launch a pre-exascale supercomputer this year that could outperform Summit, a US machine developed for the Department of Energy that is expected to top 200 petaflops when deployed later this year.
The Department of Energy (DOE) is soliciting proposals for research projects that will receive early access to Aurora, the first exascale supercomputer to be deployed in the US. There’s one catch though: the DOE is not telling anyone about the machine’s architecture.
A consortium of European and Brazilian organizations dedicated to pushing the boundaries of HPC for the energy sector has released a report on how exascale computing will be needed to move the industry forward.