India is getting ready to field the country’s most powerful supercomputer to date. According to a report in The Hindu, the 10-petaflop system will be installed this June, returning India to the upper echelons of supercomputing.
Google has revealed that its cloud customers will get first crack at Intel’s next-generation “Skylake” Xeon processors. The company is targeting the new cloud instances at HPC workloads in healthcare, media and entertainment, financial services, and other industries.
The National Aeronautics and Space Administration (NASA) has built a new modular supercomputing facility at its Ames Research Center in Silicon Valley that could be the template for future HPC infrastructure at the agency.
Episode 163: Addison Snell discusses how burst buffers are accelerating scientific solutions at NERSC with special guests Debbie Bard, Big Data Architect at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab, and Mark Wiertalla, product marketing director of the storage solutions team at Cray.
The largest Internet company on the planet has made GPU computing available in its public cloud. Google announced this week that it has added the NVIDIA Tesla K80 to its cloud offering, with more graphics processor options on the way. The search giant follows Amazon, Microsoft and others into the GPU rental business.
The world’s first auto race between self-driving vehicles didn’t exactly go according to plan. One of the two robotic racecars misjudged a corner at high speed, crashing into the barrier and ending its hopes of finishing the contest.
The Tokyo Institute of Technology, also known as Tokyo Tech, has revealed that the TSUBAME 3.0 supercomputer scheduled for installation this summer will deliver 47 petaflops of half-precision (16-bit) performance, making it one of the most powerful machines on the planet for artificial intelligence computation. The system is being built by HPE/SGI and will feature NVIDIA’s Tesla P100 GPUs.