In Depth News

Japan to Blaze New Territory with 130-Petaflop AI Supercomputer

The Tokyo-based National Institute of Advanced Industrial Science and Technology (AIST) is taking bids for a new supercomputer that will deliver more than 130 half-precision petaflops when completed in late 2017. The system, known as the AI Bridging Cloud Infrastructure (ABCI), is being built primarily for artificial intelligence developers and providers, and will be made available as a cloud resource to researchers and commercial organizations.

Intel Retools Product Roadmap with AI Silicon

At Intel’s recent AI Day, the chipmaker previewed a series of upcoming products aimed at unseating GPUs as the de facto standard for machine learning. The one-day event was Intel’s most assertive pronouncement yet of its intention to become a major player in the artificial intelligence market.

Green500 Reaches New Heights in Energy-Efficient Supercomputing

The new Green500 list of the most energy-efficient supercomputers demonstrates significant progress over last year. Thanks to the new manycore processors from Intel and NVIDIA that are starting to penetrate the top systems, performance-per-watt numbers are on the rise.
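For reference, the metric behind the list is simply sustained Linpack performance divided by power consumption, usually quoted in gigaflops per watt. A minimal sketch of the conversion, using hypothetical figures rather than any actual Green500 entry:

```python
def gflops_per_watt(rmax_teraflops: float, power_kilowatts: float) -> float:
    """Green500-style efficiency: sustained Linpack (Rmax) divided by power draw."""
    gigaflops = rmax_teraflops * 1000.0    # teraflops -> gigaflops
    watts = power_kilowatts * 1000.0       # kilowatts -> watts
    return gigaflops / watts

# Hypothetical system: 1,000 teraflops of sustained Linpack drawing 150 kW.
print(gflops_per_watt(1000.0, 150.0))      # -> ~6.7 gigaflops per watt
```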

US Exascale Computing Project Ramps Up Software Effort, Hardware R&D in Pipeline

At SC16, during a birds-of-a-feather (BoF) session held Wednesday afternoon, a room full of supercomputing enthusiasts listened attentively to an update on the Exascale Computing Project (ECP). Department of Energy (DOE) representatives were on hand to deliver the latest on the project's software and hardware efforts.

Intel Makes Presence Felt at Supercomputing Conference

Although Intel had no blockbuster reveals at this year’s supercomputing conference (SC16), it did issue a flurry of announcements to remind everyone that it still dominates much of the componentry in the HPC industry. And if anything, the company is looking to extend that hegemony, as well as move into new areas.

SC16 Panel to Take on Memory Challenges of the Exascale Era

Just as the choice of processor architectures in supercomputing is expanding with GPUs, FPGAs, ARM, and Power, memory is beginning to diversify as well. Novel technologies like 3D XPoint, resistive RAM/memristors, and 3D memory stacks are already starting to work their way into the hands of HPC users. At SC16 this week in Salt Lake City, one of the Friday panels, “The Future of Memory Technology for Exascale and Beyond IV,” will delve more deeply into the subject.

Global Supercomputing Capacity Creeps Up as Petascale Systems Blanket Top 100

FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.— The 48th edition of the TOP500 list saw China and the United States pacing each other for supercomputing supremacy. The two nations now claim 171 systems apiece in the latest rankings, together accounting for two-thirds of the list. However, China has maintained its dominance at the top of the list with the same number 1 and 2 systems as six months ago: Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops. This latest edition of the TOP500 was announced Monday, November 14, at the SC16 conference in Salt Lake City, Utah.

Mellanox Unveils HDR InfiniBand Lineup

The rollout of 200 Gbps networking began in earnest this week with Mellanox’s unveiling of its initial HDR InfiniBand portfolio of Quantum switches, ConnectX-6 adapters, and LinkX cables. Although none of the products will be available until 2017, the imminent move to 200 Gbps is set to leapfrog the competition, in particular Intel, whose current Omni-Path technology tops out at 100 Gbps.

Good Times for FPGA Enthusiasts

The prospect of FPGA-powered supercomputing has never looked brighter. The availability of more performant chips, the maturation of the OpenCL toolchain, Intel’s acquisition of Altera, and the world’s largest datacenter deployment of FPGAs, at Microsoft, all suggest that reconfigurable computing may finally fulfill its promise as a major technology for high performance computing.

Improving Supercomputing Accuracy by Sacrificing Precision

In what seems like a paradox, a group of computer scientists has demonstrated that reducing the mathematical precision of a supercomputing calculation can actually lead to more accurate solutions. The premise of the technique is to apply the energy savings reaped from lower-precision arithmetic toward additional computation that improves the quality of the results.
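A minimal sketch of the underlying trade-off, not the researchers' actual method: assume, purely for illustration, that single-precision arithmetic costs roughly half the energy of double precision, so a fixed energy budget buys twice as many iterations of a slowly converging solver. For an iteration whose error is still far above the single-precision rounding floor, the extra cheap iterations win.

```python
import numpy as np

def jacobi(A, b, iters, dtype):
    """Solve A x = b with plain Jacobi iteration in the given precision."""
    A = A.astype(dtype)
    b = b.astype(dtype)
    D = np.diag(A).copy()          # diagonal part of A
    R = A - np.diagflat(D)         # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D        # standard Jacobi update
    return x.astype(np.float64)

rng = np.random.default_rng(0)
n = 200
A = rng.random((n, n))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, A.sum(axis=1) / 0.9)   # sets Jacobi convergence factor to ~0.9
b = rng.random(n)
x_true = np.linalg.solve(A, b)

budget = 40                                  # fp64 iterations the energy budget allows
x64 = jacobi(A, b, budget, np.float64)
x32 = jacobi(A, b, 2 * budget, np.float32)   # assumed 2x savings buys 2x iterations

print("fp64 error, 40 iterations:", np.linalg.norm(x64 - x_true))
print("fp32 error, 80 iterations:", np.linalg.norm(x32 - x_true))
```

With a convergence factor of 0.9, doubling the iteration count shrinks the error by roughly another factor of 0.9^40, so the single-precision run typically lands one to two orders of magnitude closer to the true solution: at this stage of convergence the iteration count, not the arithmetic precision, dominates the final error.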