In Depth News


Swiss Deploy World’s Fastest GPU-Powered Supercomputer

The recent upgrade to the Piz Daint supercomputer at the Swiss National Supercomputing Centre (CSCS) has thrust the machine into the limelight here at the ISC High Performance conference on opening day. The Cray XC50 system turned in a Linpack result of 19.6 petaflops, good enough to capture the number three spot on the latest TOP500 list and displace Titan, the former GPU supercomputing champ running at Oak Ridge National Laboratory.

Lustre Architect Peter Braam Talks About His Latest Venture

In 1999, Peter Braam introduced Lustre, an open-source parallel file system that went on to become one of the most popular software packages in supercomputing. Braam’s success with Lustre was just the start of a career in which he founded five companies and guided technology development at numerous others -- Sun Microsystems, Red Hat, and Xyratex, among them. He’s currently working with Cambridge University on the Square Kilometre Array project and its Science Data Processor effort.

Tracking the World’s Top Storage Systems

When assessing the fastest supercomputers in the world, system performance is king, while the I/O componentry that feeds these computational beasts often escapes notice. But a small group of storage devotees working on a project at the Virtual Institute for I/O (VI4IO) wants to change that.

New Five Nanometer Transistor Unveiled by IBM and Cohorts

IBM and its partners have developed a novel technology to build 5nm chips, based on silicon nanosheet transistors. Compared to 10nm chips using FinFET transistors, the new technology promises to deliver a 40 percent performance increase, a 75 percent power savings, or some combination of the two.

NVIDIA Amps Up AI Cloud Strategy with ODM Partnerships

NVIDIA is hooking up with four of the world’s largest original design manufacturers (ODMs) to help accelerate adoption of its GPUs into hyperscale datacenters. The new partner program would give Foxconn, Inventec, Quanta and Wistron early access to the HGX reference architecture, NVIDIA’s server design for machine learning acceleration.

Google and IBM Battle for Quantum Supremacy

Building a quantum computer that can outperform conventional systems on certain types of algorithms looks to be tantalizingly close. As it stands today, Google and IBM appear to be the most likely candidates to claim that achievement. 

The New DOE Budget: A Model for Supply-Side Supercomputing

The FY 2018 Congressional budget request for the Department of Energy has been released, reflecting a White House that favors supercomputing infrastructure over scientific research. That turns out to be both good news and bad news for the HPC community.

Microsoft’s Plans for FPGAs in Azure Should Worry Traditional Chipmakers

At Microsoft’s recent Build conference, Azure CTO Mark Russinovich presented a future that would significantly expand the role of FPGAs in the company’s cloud platform. Some of these plans could sideline the GPUs and CPUs used for deep learning from the likes of NVIDIA, Intel, and other chipmakers.

Google Reveals Major Upgrade and Expanded Role for TPU

In a blog post penned by Google veterans Jeff Dean and Urs Hölzle, the company announced it has developed and deployed its second-generation Tensor Processing Units (TPUs). The newly hatched TPU is being used to accelerate Google’s machine learning work and will also become the basis of a new cloud service.

HPE Unveils New Prototype of Memory-Driven Computer

Hewlett Packard Enterprise has introduced what looks to be the final prototype of “The Machine,” an HPE research project aimed at developing a memory-driven computing architecture for the era of big data. According to HPE CTO Mark Potter: “The architecture we have unveiled can be applied to every computing category—from intelligent edge devices to supercomputers.”