Viewing posts from June, 2016

Energy-Efficient Supercomputing Comes of Age with TOP500-Green500 Merge

There was a time when the only thing that the high performance computing industry paid attention to was FLOPS. Indeed, for most of the history of HPC, floating point operations per second was the one true metric, and only those machines that delivered them in the largest quantities were deemed to be true supercomputers. Performance, after all, is HPC’s middle name.

South Africa Joins Petaflop Club

The South African Council for Scientific and Industrial Research (CSIR) has announced its newest supercomputer, which will deliver about a peak petaflop worth of computing power. The new machine, known as Lengau (the Setswana name for Cheetah), was procured from Dell and will be housed at the Centre for High Performance Computing (CHPC).

Intel Takes Aim at High-End Analytics with New Xeon E7 Processors

Intel released its latest four-socket and eight-socket Broadwell-EX processors this week, following on the heels of the dual-socket Broadwell-EP chips the company launched at the end of March. The new chip family, known as the Xeon E7-8800/4400 series, is destined for scale-up servers running applications with prodigious appetites for memory and processor cores.

Centres of Excellence: Europe's Approach to Ensure Competitiveness of HPC Applications  

While there is always a lot of buzz about the latest HPC hardware architecture developments or exascale programming methods and tools, everyone agrees that in the end the only things that count are the results and societal impact produced by the technology. Those results and impacts come from the scientific and industrial applications running on HPC systems. The application space is diverse, ranging from astrophysics (A) to zymology (Z). So the question arises of how to effectively fund the development and optimization of HPC applications to make them suitable for current petascale and future exascale systems.

A New Stampede in Texas and Taking Watson to the Edge

Addison Snell and Michael Feldman are joined by Chris Willard, PhD, as they discuss TACC's new supercomputer and the IBM/Cisco Watson partnership.

Stampede 2: The 18-Petaflop Sequel

The National Science Foundation (NSF) is spending $30 million on the second iteration of the Stampede supercomputer, which will provide 18 petaflops worth of compute to tens of thousands of scientists and researchers across the US. Named after its predecessor, Stampede 2 will double the FLOP count of the original system, which cost $27.5 million when it came online in 2013. As with the original Stampede, the new machine will be housed at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (UT Austin).

Looking for an Exaflop? This Cloud Provider Has It

For those of you anxiously anticipating the first exaflop supercomputer, your wait is over – sort of. HPC cloud specialist Rescale is already offering more than 1.4 exaflops of computing power across its global network. The company, which casts its cloud as a “unified HPC simulation platform for the enterprise IT environment,” says its infrastructure currently encompasses 8 million servers spread across 30 datacenters. In aggregate, that works out to over 1400 petaflops of peak computing power, according to the Rescale website.
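As a back-of-the-envelope check on those figures, a quick calculation shows what 1.4 exaflops spread across 8 million servers implies per machine. The per-server average below is an inference from the numbers above, not a figure Rescale publishes:

```python
# Sanity check on Rescale's aggregate capacity claim:
# 8 million servers totaling more than 1.4 exaflops of peak compute.
servers = 8_000_000
total_peak_flops = 1.4e18  # 1.4 exaflops = 1400 petaflops, per the Rescale website

# Implied average peak performance per server (inferred, not published).
per_server_gflops = total_peak_flops / servers / 1e9
print(f"~{per_server_gflops:.0f} GFLOPS per server on average")  # → ~175 GFLOPS
```

At roughly 175 gigaflops apiece, these are commodity cloud servers of the era rather than HPC-class nodes, which is consistent with the aggregate being pooled across many general-purpose datacenters.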