One of the biggest impediments to more widespread use of AI is the lack of developer expertise in machine learning software. Bonsai, a startup based in Berkeley, California, is looking to change that in a big way by offering a platform that abstracts away much of the low-level machinery that makes machine learning such a daunting challenge for businesses.
This week IBM demonstrated software that was able to significantly boost the speed of training deep neural networks, while improving the accuracy of those networks. The software achieved this by dramatically increasing the scalability of these training applications across a large number of GPUs.
The Chinese government has released a three-phase plan to become the world leader in artificial intelligence. According to a document published by the country’s State Council, the development and deployment of AI is seen as a strategic opportunity for the country, and will be worth 10 trillion yuan (1.49 trillion USD) to the nation’s economy by 2030.
While a number of commentators have written off AMD’s prospects of competing against Intel in HPC, testing of the latest server silicon from each chipmaker has revealed that the EPYC chip offers some surprising performance advantages over Intel’s newest "Skylake" Xeon destined for the datacenter.
A report published by James Kisner, an equity analyst at global investment banking firm Jefferies, shot a few holes in IBM’s Watson and the company’s cognitive computing strategy. Along the way, Kisner offered some interesting insights into the AI market and some of the major players competing in the space.
After already shipping more than half a million of its next-generation Xeon products to customers, Intel officially launched its new Xeon Scalable processor product line. The chipmaker is calling it the “biggest data center advancement in a decade.”
Japanese computer-maker Fujitsu is developing an AI-specific microprocessor called the Deep Learning Unit (DLU). The company’s goal is to produce a chip that delivers 10 times better performance per watt than the competition.
For all the supercomputing trends revealed on recent TOP500 lists, the most worrisome is the decline in performance growth that has taken place over the last several years – worrisome not only because performance is the lifeblood of the HPC industry, but also because there is no definitive cause of the slowdown.
In March, ministers from seven of the largest European countries signed a declaration that established a timeline for fielding two exascale supercomputers in 2022. The agreement also specified that at least one of these systems would be based on European technology, although, as it turns out, not everyone seems to think this is the best way forward.
Even though there wasn’t much turnover in the latest TOP500 list, several new petascale supercomputers appeared that reflect some interesting trends in the way HPC architectures are evolving. For the purposes of this discussion, we’ll focus on three of these new systems: Stampede2, TSUBAME 3.0, and MareNostrum 4.