At Taiwan’s GPU Technology Conference this week, NVIDIA founder and CEO Jensen Huang announced the HGX-2, a 16-GPU reference design aimed at some of the most computationally demanding HPC and AI workloads. As a reflection of its tightly integrated design, Jensen characterized the platform as “the world’s largest GPU.”
At Intel’s inaugural AI DevCon conference this week, AI Products Group chief Naveen Rao updated the roadmap for the company’s artificial intelligence chips. The changes will affect the much-anticipated Neural Network Processor and, to a lesser degree, general-purpose products like Xeons and FPGAs.
Tachyum, a Silicon Valley startup, has unveiled a new processor that the company says can tackle a broad range of workloads in HPC, data analytics, artificial intelligence, and web services, while using a fraction of the power of existing chips.
Google has demonstrated an artificial intelligence technology that represents the most sophisticated example to date of a computer engaging in natural conversation with a human. Upon hearing the interaction, some listeners felt the software had convincingly passed the Turing test.
Thanks to the discovery of the Higgs boson in 2012, CERN’s Large Hadron Collider (LHC) has probably become the most widely recognized science project on the planet. Less well known is the computing infrastructure that supports this effort and the demands placed on it.