News

Viewing posts from July, 2016

New Startup Targets Machine Learning, and Trends in HPC Software

Addison Snell and Chris Willard discuss Wave Computing's entry into machine learning, plus three new research reports from Intersect360 Research.

Pondering AMD’s Ambitions for High-Performance APUs

AMD has flirted with the idea of building big, brawny APUs for servers ever since the company started developing its CPU-GPU hybrid chips back in 2006. Combining x86 and Radeon silicon on the same die for desktops and laptops was the basis for AMD's original Fusion processor, later renamed the Accelerated Processing Unit (APU). Now, with the anticipated next-generation "Zen" CPU core and the future "Vega" GPU, it looks like a high-performance server APU could finally become a reality.

An Early Debut for NVIDIA’s Volta GPU?

The GPU rumor mill was grinding away this week with talk of an accelerated launch for NVIDIA’s next-generation Volta processor. Volta is the architecture that will succeed the current-generation Pascal design, which is the basis for the Tesla P100 GPUs destined for the HPC and deep learning markets. According to a report in Fudzilla, the first Volta parts may show up in 2017, a year ahead of NVIDIA’s original schedule.

EXTOLL's Network Marches to the Beat of a Different Drummer

Network latency and bandwidth are often the choke points for application performance in HPC codes. As a result, the network component of HPC systems has successfully resisted the trend toward general-purpose solutions, Ethernet notwithstanding. Such an environment is conducive to innovation and experimentation, as exemplified by EXTOLL's network technology.

Startup Will Offer Custom-Built Deep Learning Computers

Silicon Valley's newest chipmaker, Wave Computing, came out of stealth mode this week, announcing a family of computers purpose-built for deep learning. The new systems are powered by the Wave Dataflow Processing Unit (DPU), a massively parallel dataflow processor designed to accelerate deep learning models. According to the company, the technology is an order of magnitude faster than GPUs or FPGAs, while also being more energy efficient.