Silicon Valley startup Wave Computing announced its dataflow compute appliance for machine learning is now available via an early access program, with general availability scheduled for Q4 2017. According to the company, its custom-built appliance can deliver 2.9 petaops of performance and train neural networks an order of magnitude faster than the current crop of GPU-accelerated hardware.
Simon Fraser University (SFU) has officially launched Canada's most powerful academic supercomputer. The new 3.6-petaflop system, known as Cedar, is just the beginning of a big push by the Canadian government to upgrade its network of 50 aging HPC machines used to serve the nation's academic research community.
When a poker-playing AI program developed at Carnegie Mellon University challenged a group of machine learning-savvy engineers and investors in China, the results were the same as when the software went up against professional card players: it beat its human competition like a drum. And that points to AI's greatest strength, as well as its greatest weakness.
A supercomputing application that can figure out if state legislative districts have been unfairly drawn has the potential to change electoral politics in the United States. According to its inventors at the University of Illinois at Urbana-Champaign, the application could be used by courts to determine if partisan gerrymandering has been used to unfairly manipulate these maps.
This week Russia posted its list of the 50 most powerful supercomputing systems installed in the country. From the looks of things, not much is happening in the HPC space there. System turnover across the Top50 during the last six months was minimal and aggregate performance barely budged.
Although Google's Tensor Processing Unit (TPU) has been powering the company's vast empire of deep learning products since 2015, very little was known about the custom-built processor. This week the web giant published a description of the chip and explained why it's an order of magnitude faster and more energy-efficient than the CPUs and GPUs it replaces.
Ministers from seven European countries have signed on to a plan to develop an exascale capability based on technology developed within the EU member states. The goal is to bring up two pre-exascale supercomputers by 2020 and two full exascale systems no later than 2023.
In November 2015, when NVIDIA CEO Jen-Hsun Huang proposed that machine learning is high performance computing's first killer app for consumers, there was only sketchy evidence to back up that claim. Today though, it looks like the NVIDIA chief was just a little ahead of his time.
This week in San Francisco, Intel held its first Manufacturing and Technology Day, an event designed to reassure investors and customers that Moore's Law is alive and well and delivering the cost and performance benefits it has for the last 50 years. However, to make that claim viable, the chipmaker has recast the law to deal with the realities of a slowdown in transistor shrinkage.
The Engineering and Physical Sciences Research Council (EPSRC) has allocated £20 million for six new Tier 2 supercomputing centers spread across the United Kingdom. The facilities are aimed at supporting both academic and industrial users and will house medium-sized supercomputers for scientific research and engineering.