Power and Communication Needs of Supercomputers

By Kevin J. McCarthy Sr.  
Other parts of this article: Pt. 1: High-Performance Computing (HPC) Is Likely to Become Mainstream; Pt. 2: this page; Pt. 3: Tackling HPC's Massive Heat Generation

HPCs require 480 volts, whereas today's computers generally require 120 volts. Therefore, an HPC facility will not need power distribution units (PDUs) to convert the UPS system's 480-volt output to 208/120 volts. Instead, a 3-phase, 480-volt power line will be run directly to each HPC cabinet. PDUs typically operate at only about 95 percent efficiency, so eliminating them increases the overall efficiency of the electrical system.

Supplying 480 volts directly to the HPC is how manufacturers maximize available power while keeping wire sizes small. Each HPC cabinet will consume 45 kilowatts; with a 3-phase, 480-volt supply, this equates to roughly 54 amps. Each Cray XT6 cabinet therefore requires a 3-phase, 100-amp, 480-volt feed.
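The current figure above can be checked with the standard formula for a balanced three-phase load, I = P / (√3 × V). A minimal sketch (the 45-kilowatt cabinet load and 480-volt supply come from the text; unity power factor is an assumption for illustration):

```python
import math

def three_phase_current(power_w: float, line_voltage_v: float,
                        power_factor: float = 1.0) -> float:
    """Line current (amps) drawn by a balanced three-phase load."""
    return power_w / (math.sqrt(3) * line_voltage_v * power_factor)

# 45 kW cabinet on a 480 V three-phase feed (unity power factor assumed)
amps = three_phase_current(45_000, 480)
print(f"{amps:.0f} A")  # ≈ 54 A, matching the figure in the text
```

The headroom between the 54-amp draw and the 100-amp feed allows for inrush, power-factor effects, and code-required derating.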

The load of an HPC is large and may seem daunting, but it can also prompt owners to rethink how the data center facility is run. An HPC can force a site to become as energy efficient as possible, which would actually lower the site's power usage effectiveness (PUE) even as total power consumption increases. The load could easily push even robust UPS systems near their capacity; while UPS systems operate more efficiently at high load, this may also hasten a facility upgrade. Any installation of this nature requires significant planning.
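PUE is simply total facility power divided by IT equipment power, so trimming conversion losses such as the eliminated PDUs lowers it directly. A rough sketch with hypothetical numbers (the 95 percent PDU efficiency is from the text; the 450 kW IT load and 300 kW overhead are illustrative assumptions, not from the article):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical site: 450 kW of IT load (ten 45 kW cabinets) plus 300 kW of
# cooling, lighting, and other overhead -- illustrative numbers only.
it_kw = 450.0
overhead_kw = 300.0

# With 95%-efficient PDUs, about 5% of the power delivered to IT is lost
# in the conversion stage; feeding 480 V directly removes that loss.
pdu_loss_kw = it_kw / 0.95 - it_kw
with_pdus = pue(it_kw + overhead_kw + pdu_loss_kw, it_kw)
without_pdus = pue(it_kw + overhead_kw, it_kw)

print(f"PUE with PDUs:    {with_pdus:.3f}")
print(f"PUE without PDUs: {without_pdus:.3f}")  # lower is better
```

Even this single change shaves a few hundredths off the PUE; combined with cooling improvements, the effect on the whole facility can be substantial.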

To support an HPC's communications requirements and the massive bandwidth it needs, many installations use a communications protocol called InfiniBand, which employs a switched-fabric topology, as opposed to a hierarchical switched network like Ethernet. InfiniBand uses multiple lanes per link for speed and redundancy in the communications network. As a communications link for data flow between processors and input/output devices, it offers a base signaling rate of 2.5 gigabits per second per lane and supports up to 64,000 addressable devices per subnet. InfiniBand is also scalable and supports quality of service and failover. While InfiniBand is not exclusive to the HPC market, it is primarily used there, where it competes with 10-gigabit Ethernet. With either an InfiniBand or a 10-gigabit Ethernet system, facility managers can expect a more robust communications network with more fiber than is typically deployed.
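For context on those link rates: the original (SDR) InfiniBand generation signals at 2.5 gigabits per second per lane with 8b/10b encoding, so the usable data rate is 2 gigabits per second per lane, and links aggregate 1, 4, or 12 lanes. These encoding and lane-width figures are standard InfiniBand SDR parameters rather than details from the article; a quick sketch:

```python
def infiniband_data_rate_gbps(signal_rate_gbps: float, lanes: int,
                              encoding_efficiency: float = 8 / 10) -> float:
    """Usable data rate of an InfiniBand link.

    SDR/DDR/QDR generations use 8b/10b encoding, so only 8 of every
    10 transmitted bits carry payload.
    """
    return signal_rate_gbps * encoding_efficiency * lanes

# SDR links at common lane widths
for lanes in (1, 4, 12):
    rate = infiniband_data_rate_gbps(2.5, lanes)
    print(f"SDR {lanes:2d}x: {rate:.0f} Gbit/s usable")
```

A 4x SDR link thus delivers 8 gigabits per second of usable bandwidth, already comfortably ahead of the 10-gigabit Ethernet alternative once later DDR and QDR signaling rates are factored in.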


Posted on 2/2/2011
