Episode 163: Addison Snell discusses how burst buffers are accelerating scientific solutions at NERSC, with special guests Debbie Bard, Big Data Architect at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab, and Mark Wiertalla, product marketing director on the storage solutions team at Cray.
The largest Internet company on the planet has made GPU computing available in its public cloud. Google announced this week that it has added the NVIDIA Tesla K80 to its cloud offering, with more graphics processor options on the way. The search giant follows Amazon, Microsoft and others into the GPU rental business.
The world’s first auto race between self-driving vehicles didn’t exactly go according to plan. One of the two robotic racecars misjudged a corner at high speed, crashing into the barrier and ending its hopes of finishing the contest.
The Tokyo Institute of Technology, also known as Tokyo Tech, has revealed that the TSUBAME 3.0 supercomputer scheduled to be installed this summer will deliver 47 petaflops of half-precision (16-bit) performance, making it one of the most powerful machines on the planet for artificial intelligence computation. The system is being built by HPE/SGI and will feature NVIDIA’s Tesla P100 GPUs.
If you’ve been tracking IBM’s newsfeed lately, you’ll notice the company is expanding its cognitive computing strategy to encompass more and more platforms, not to mention customers. Recent platform upgrades include IBM’s Bluemix cloud, private clouds in general, the internet of things (IoT), the z Systems mainframe, and even whiteboards.
Gamalon Inc. emerged from stealth mode this week, announcing two machine learning products based on an in-house technology known as Bayesian Program Synthesis (BPS). The company claims BPS can perform machine learning tasks 100 times faster than conventional deep learning techniques, while providing more accurate results.