Commercial Data Centers Will Begin to Use High-Performance Computers
April 21, 2011
Today's tip from Building Operating Management comes from Kevin McCarthy, vice president of EDG2 Inc.: High-performance computers are set to enter the data center.
Until recently, academic and government research centers were the exclusive domain of high-performance computing (HPC) because of supercomputers' ability to perform sophisticated mathematical modeling. In 1976, the first Cray-1 supercomputer was installed at Los Alamos National Laboratory. Designed by Seymour Cray, whom many regard as the "father of supercomputing," the Cray-1 achieved a speed of 160 megaflops, or 160 million floating-point operations per second (FLOPS). Last year, Cray Inc. installed what was then the world's fastest supercomputer at Oak Ridge National Laboratory; named "Jaguar," the XT5 system delivers a peak performance of 1.8 petaflops, or 1,800 trillion FLOPS. That surpassed the IBM "Roadrunner" system, installed in 2008, which was the first computer to break the petaflop barrier.
Fast-forward to March 2010, when Cray introduced the CX1000, said to be designed for the "typical" data center.
Cray's and other manufacturers' cluster-based supercomputers are likely to become a mainstream solution for data centers that require high-availability clusters for 24x7 transaction processing. HPC will also become the required computing platform for Internet content providers if the forecast of a 3D Internet within five to 10 years proves accurate. In addition, the advent of "P4" medicine — predictive, preventive, personalized, participatory — will require HPC analysis of each individual's genome, opening the way for a sea change in how doctors treat patients and how medical technologies are applied.
Compared with today's typical data center, an HPC facility will require dramatic increases in power and cooling capacity. These massively parallel networks of specialized servers have a load density of 700 to 1,650 watts per square foot, while most current data centers have a load density of 100 to 225 watts per square foot.
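To put those densities in perspective, here is a minimal sizing sketch using the figures above; the 10,000-square-foot floor area is a hypothetical example, not a figure from the article.

```python
# Rough sizing sketch comparing total IT load at conventional vs. HPC
# load densities. Densities (W/sq ft) come from the figures cited above;
# the floor area is a hypothetical example.

def total_load_kw(area_sqft: float, watts_per_sqft: float) -> float:
    """Total electrical load in kilowatts for a given floor area and density."""
    return area_sqft * watts_per_sqft / 1000.0

AREA_SQFT = 10_000  # hypothetical data center floor area

conventional_kw = total_load_kw(AREA_SQFT, 225)   # high end of typical range
hpc_kw = total_load_kw(AREA_SQFT, 1_650)          # high end of HPC range

print(f"Conventional: {conventional_kw:,.0f} kW")        # 2,250 kW
print(f"HPC:          {hpc_kw:,.0f} kW")                 # 16,500 kW
print(f"Increase:     {hpc_kw / conventional_kw:.1f}x")  # 7.3x
```

Even at the high end of today's typical range, the same floor area would draw roughly seven times more power as an HPC facility, and the cooling plant must scale with it.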
For facility managers responsible for data centers that require massive processing power, now is the time to learn about high-performance computing.