News

HPE Unveils Prototype of "The Machine"

Dec. 5, 2016

By: Michael Feldman

Hewlett Packard Enterprise has demonstrated a prototype of “The Machine,” the company’s much-talked-about R&D project to develop a new computer architecture for the big data age. But rather than yielding a reference platform for future systems, the effort has been refocused on developing a set of technologies that will be scattered across HPE’s product portfolio.

The Machine prototype, which HPE has dubbed version 0.9, was unveiled at the company’s recent Discover London 2016 conference. "We have achieved a major milestone with The Machine research project -- one of the largest and most complex research projects in our company's history," said Antonio Neri, Executive Vice President and General Manager of the Enterprise Group at HPE, in a prepared statement on November 28th.

In addition to demonstrating the prototype, HPE has also rebranded the concept underlying The Machine as “memory-driven computing.” The idea is to put memory at the center of the architecture in order to deal with the realities of a data-centric computing world. That includes not just enterprise computing, but high performance computing (HPC), the Internet of Things (IoT), and web-scale computing. All these application segments are being deluged with so much data that traditional memory subsystems are not up to the task – in either speed or capacity.

Of course, HPE is not the only one pursuing data-centric computing. IBM has been doing this for years, as have Intel and practically every other IT provider that wants to survive in the 21st century. HPE, however, is alone in developing an integrated platform, which, up until recently, seemed destined to be the template for its entire portfolio.

Originally, HPE was counting on using memristors as the centerpiece of The Machine to solve the memory conundrum. Memristors promised DRAM-like speed, non-volatility, and even independence from conventional silicon-based technology. As a universal memory, memristors would flatten the memory/storage hierarchy, changing the nature of computing in a fundamental way. As it turns out, though, memristors have proved difficult to implement, and the prototype demonstrated last month contained not a single one.

What it did contain were server nodes that housed regular DRAM, along with two to four terabytes of conventional non-volatile memory (NVM), likely implemented as NVDIMMs. An unspecified CPU in the form of a customized SoC served as the compute engine. The DRAM and NVM were lashed together with a silicon photonic fabric based on HPE’s X1 optical interconnect technology. By using light as the transfer medium, X1 is able to shuttle data across an optical fiber at up to 1.2 terabits per second. The extreme transfer speeds between the memory components, as well as the larger memory footprint enabled by NVM, are the central features of the architecture.
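To put that figure in perspective, a rough back-of-the-envelope calculation (the arithmetic is ours, not HPE’s) shows what such a link can move:

    4 TB of NVM × 8 bits per byte = 32 terabits
    32 terabits ÷ 1.2 terabits per second ≈ 27 seconds

In other words, the entire contents of a four-terabyte fabric-attached memory pool could, in principle, be streamed across a single X1 link in under half a minute, versus more than five minutes over a 100 Gbps Ethernet link.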

HPE claimed execution speeds “on a variety of workloads” of up to 8,000 times faster than on conventional architectures. The 8,000x figure was for financial models; lesser speedups were claimed for similarity search (300x), large-scale graph inference (100x), and in-memory analytics (15x). Since HPE didn’t supply much detail beyond that, it’s difficult to ascertain how much of the gain comes from the specific technologies -- non-volatile memory, silicon photonics, and the customized SoC -- and how much from the synergies between them. These individual technologies matter because, while The Machine may never become a distinct offering, HPE plans to fold them into existing and future product lines.

For example, even if the memristor effort never bears fruit, the company intends to bring some form of byte-addressable non-volatile memory to the server market in the 2018/2019 timeframe, most likely in the form of non-flash NVDIMMs. At this point, it looks like HPE is counting on its collaboration with SanDisk to develop this technology. SanDisk has been working on 3D resistive RAM (ReRAM) componentry for some time, and the company believes it will eclipse NAND-based memory across a number of applications.

If that effort pans out, ReRAM will be incorporated into board-pluggable NVDIMMs, which can act as slightly slower but more capacious main memory for server nodes. An early version of this already exists in the ProLiant Gen9 server line as 8GB NVDIMMs, though those rely on conventional NAND technology.
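The practical difference between such NVDIMMs and ordinary flash storage is that the non-volatile tier is addressed with load and store instructions rather than block I/O. As a minimal sketch of what that looks like to software, the following C program maps a region of persistent memory directly into a process’s address space, assuming a Linux system where the NVDIMM is exposed through a DAX-capable file system (the path /mnt/pmem/example and the region size are illustrative, not anything HPE has specified):

    /* Minimal sketch: treating byte-addressable NVM as ordinary memory.
     * Assumes Linux with the NVDIMM exposed via a DAX-mounted file system
     * (e.g. /mnt/pmem); the path and size below are illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 1UL << 20;               /* 1 MiB region */
        int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

        /* Map the persistent region directly into the address space;
         * loads and stores now reach the NVM with no block-I/O path. */
        char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(pmem, "data written with ordinary store instructions");

        /* Flush the data so it is durable across power loss. */
        if (msync(pmem, len, MS_SYNC) != 0) { perror("msync"); return 1; }

        munmap(pmem, len);
        close(fd);
        return 0;
    }

In practice a persistent-memory library would handle cache flushing and failure atomicity more carefully, but the principle is the same: data in the NVM tier is reached at memory speed, without a trip through the storage stack.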

The X1 silicon photonics technology is also headed for productization -- in the near term as part of HPE’s “Synergy” systems, which are due to roll out next year. Synergy is an architecture designed for what HPE is calling “composable infrastructure,” a term that refers to the ability of the system to reconfigure its resources according to the type of workload being run. It’s a software-defined framework for all the datacenter machinery (compute, network, storage), placed under the control of a single API. HPE is also planning to integrate silicon photonics into other product lines, including its storage solutions, as early as 2018/2019.

The silicon photonics technology will also be part of future server products that use the same type of fabric-attached memory demonstrated in the prototype. Down the road, though, the fabric will support Gen-Z, a recently devised high-performance interconnect protocol that abstracts away physical memory characteristics. The idea is to provide a hardware-agnostic protocol that is independent of processor or memory type.

HPE is also developing software for the architecture. For example, within the next year HPE plans to integrate some of the open source code it developed for Grommet and Hortonworks’ Spark into existing server lines that can take advantage of memory-centric features. In the 2018/2019 timeframe, the company will add more advanced analytics and other types of applications to future products.

While the company’s products all stand to benefit from these technologies in some fashion, all of The Machine’s building blocks might indeed end up in an integrated platform if HPE decides to pursue its exascale ambitions. Although the company has not put any stakes in the ground with regard to delivering such systems, its recent acquisition of SGI suggests its pursuit of this market (if it can even be called that) is already underway. Certainly, if HPE manages to develop superior memory technologies and photonic interconnects over the next few years, it will have a compelling story to tell potential exascale customers in 2020.

Here is a background video on The Machine, courtesy of HPE.

<iframe src="https://www.youtube.com/embed/2VG59FYkPdM" width="780" height="439" frameborder="0" allowfullscreen="allowfullscreen"></iframe>