Bridges Supercomputer Boots Up at Pittsburgh

Aug. 31, 2016

By: Michael Feldman

Pittsburgh Supercomputing Center (PSC) has announced its newest supercomputer, known as Bridges, is up and running. According to the press release, the system entered production in July and already supports about 400 research projects across its university network.

Bridges was the result of a $9.65 million award from the NSF in 2014. The contract for the machine was given to HP (now HPE), which is supplying all the server infrastructure. The intent of Bridges was to bring supercomputing to “nontraditional users and research communities.” At the time, that equated to big data applications, though not necessarily outside traditional HPC domains. In fact, if you look at how the system has been allocated so far, it’s essentially being employed for science applications with a data analytics bent, including:

  • Visualizing the historical spread of diseases in conjunction with the NIH’s Models of Infectious Disease Agent Study (MIDAS) project
  • Mapping bacterial DNA from the intestines of humans in both healthy and diabetic patients
  • Determining the electronic structure of TIPS pentacene, a semiconductor molecule with applications in solar power cells
  • Modeling the cost/benefits of flu vaccine combinations
  • Assembling the genetic sequences of the Narcissus flycatcher and the Sumatran rhinoceros

As of today, 2,300 users have access to Bridges in areas such as neuroscience, machine learning, biology, the social sciences, computer science, and engineering.

Like many data-centric supercomputers, Bridges offers a relatively modest number of FLOPS, but lots of memory: 895 teraflops and 130 TB, respectively. One of Bridges’ most notable features is that it’s one of the first supercomputers to be equipped with Intel’s Omni-Path interconnect fabric. But its real claim to fame is that it provides a rather eclectic mix of servers, each suited to a particular application profile. PSC lists five server configurations, which are tallied in the quick sketch after the list:

  • 752 Regular Shared Memory (RSM) nodes: HPE Apollo 2000s, with 2 Intel Xeon E5-2695 v3 CPUs (14 cores per CPU), 128GB RAM and 4TB of on-node storage
  • 16 RSM GPU nodes: HPE Apollo 2000s, each with 2 NVIDIA K80 GPUs, 2 Intel Xeon E5-2695 v3 CPUs (14 cores per CPU) and 128GB RAM
  • 8 Large Shared Memory (LSM) nodes: HPE ProLiant DL580s, each with 4 Intel Xeon E7-8860 v3 CPUs (16 cores per CPU) and 3TB RAM
  • 2 Extreme Shared Memory (ESM) nodes: HPE Integrity Superdome Xs, each with 16 Intel Xeon E7-8880 v3 CPUs (18 cores per CPU) and 12TB RAM
  • Database, web server, data transfer, and login nodes: HPE ProLiant DL360s and HPE ProLiant DL380s, each with 2 Intel Xeon E5-2695 v3 CPUs (14 cores per CPU) and 128GB RAM. Database nodes have SSDs or additional HDDs.
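To get a sense of how those configurations add up, here is a back-of-the-envelope tally in Python, using only the node counts and per-node figures quoted above. The service nodes are left out and the RAM figures are nominal per-node maxima, so treat the results as approximations rather than official totals:

    # Rough tally of the Phase 1 compute partitions listed above.
    # Service nodes are omitted and RAM figures are nominal per-node
    # maxima, so the results are approximations.
    node_types = [
        # (partition, nodes, CPUs per node, cores per CPU, RAM per node in GB)
        ("RSM",     752,  2, 14,   128),
        ("RSM GPU",  16,  2, 14,   128),
        ("LSM",       8,  4, 16,  3072),
        ("ESM",       2, 16, 18, 12288),
    ]
    for name, nodes, cpus, cores, ram_gb in node_types:
        total_cores = nodes * cpus * cores
        total_ram_tb = nodes * ram_gb / 1024
        print(f"{name:8s} {nodes:4d} nodes  {total_cores:6d} cores  {total_ram_tb:6.1f} TB RAM")

The split is telling: the 752 RSM nodes supply the bulk of the cores, while the handful of LSM and ESM machines account for nearly half the memory.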

It’s interesting to see the Integrity Superdomes in a supercomputer, since they are typically used for mission-critical enterprise work. Their presence here reflects the fact that they are HPE’s principal solution for servers with very large memory capacities, in this case 12TB. When HPE completes its acquisition of SGI, it will have the shared-memory UV systems to offer as well, assuming it retains the UV line after the acquisition.

Bridges is also in line for an upgrade rather soon. The press release says late summer, but considering there are just three weeks left until autumn, we assume the deployment is imminent. This second phase will add an additional 407 teraflops and 130 TB of RAM, courtesy of an additional 32 RSM nodes, 34 LSM nodes, and 2 ESM nodes, each equipped with the latest Intel Broadwell Xeon CPUs. The new RSM nodes will also each come with a pair of NVIDIA’s new P100 GPUs to provide more FLOPS and offer deep learning researchers the latest hardware. When all is said and done, Bridges will top out at 1.3 petaflops and 274 TB of memory.
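The headline performance figure checks out against the two phase numbers; a one-line sanity check:

    # Phase 1 (895 TF) plus the Phase 2 addition (407 TF)
    phase1_tflops, phase2_tflops = 895, 407
    print(f"Combined peak: {(phase1_tflops + phase2_tflops) / 1000:.1f} petaflops")  # 1.3 petaflops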