Critical Facilities Summit

4 FM quick reads on data centers

1. Hurricane Sandy Prompts New Data Center Thinking


Hurricane Sandy, the second costliest hurricane in United States history, is an object lesson in how the best-laid contingency plans can be shredded in no time when an extreme weather event hits. The storm has spurred a new way of thinking about data center resiliency.

One success story involved a company that acted ahead of the storm to hold the water back: crews welded the lower-level doors shut so water couldn't get in and built a wall in front of the fuel tank. But such success stories were relatively few and far between. In many cases, flooding caused serious problems in data centers.

During the storm, water rose 13 feet above sea level, which caused New York City to expand its flood plain area and redraw its flood maps. A lot of basements in multitenant data centers were flooded, and that is where much of the critical infrastructure was located.

New York City code requires fuel to be stored at the bottom of buildings, which is why fuel pumps, tanks, and generators were in the basement. Even facilities with generators on the roof fared poorly: fuel pumps and fuel tanks were knocked over, and there was no way to get fuel from the basement up to the roof. UPS and transfer systems at higher levels didn't do much good when basements were flooded floor to ceiling.

There were similar problems in smaller data centers in high-rise buildings downtown, where pumps wouldn't function and power feeds weren't watertight. When water filled the basements, oil tanks lifted off their carriages and fuel oil piping connections broke. Trucks were brought in with packaged pumping systems that were physically connected to buildings' fuel oil risers to keep the generators running.


2. Modular Data Centers Offer Flexibility

MODULAR DATA CENTERS are attractive options for facility managers for many reasons. Built offsite in a controlled environment, modular data centers primarily offer owners flexibility and speed. With proper planning and logistics, modular data center units can be built ahead of time or while the remainder of a project is under construction. Because modular units can be fabricated at an assembly plant and stored until ready for use, concurrent construction can occur while site permitting is processed, and while site work and support construction is underway. Modular, factory-built, repetitive units provide a higher level of quality due to the indoor controlled construction environment and the ability to make continuous improvements throughout the manufacturing process.

But don't get ahead of yourself. Early discussions with the local authority having jurisdiction are beneficial and may reveal a requirement for inspections at the assembly factory by a third party approved by that authority, in addition to any UL or ETL inspection label. The greatest risk lies in poor assembly, which can lead to air and water leaks. The more joints introduced into the modular data center, the greater the chance for leaks to occur and the less efficient the assembled unit becomes.

Not all modular data center designs are the same. There are essentially three types of modular designs:

  1. A complete modular data center including infrastructure and IT equipment room space, fabricated offsite and shipped to the site fully assembled in one piece. For ease of shipping, these units are often designed to comply with the international size requirements of intermodal (e.g. ISO standard) shipping containers.
  2. Individual components designed and assembled in a modular form, and shipped to the site for assembly into a complete data center with infrastructure and IT computer room space.
  3. Portions of the data center infrastructure or IT cabinets, such as electrical equipment skids and cooling system packages, assembled offsite and then installed in existing or newly built spaces. This speeds installation and limits on-site construction time. Modules can consist of skid-mounted equipment or complete enclosures in one package suitable for exterior installation.

A wide range of data center owners may consider modular data centers and modular components, including data center wholesalers, colocation providers, universities, and corporations.

3. Monitoring Is Key To Data Center Efficiency

The key piece of an energy-efficiency program in data centers is monitoring, says Jason Yaeger, director of operations at Online Tech.

"If you are not monitoring your IT load and what your critical infrastructure is using — if you have something that is using more electricity than it should — you will not know unless you are monitoring on a daily basis. We first invested money in monitoring," he says.

New data centers have the opportunity to incorporate energy efficiency into the original design, which is what the University of Arkansas for Medical Sciences, Little Rock, did with its 12,944-square-foot primary data center, which opened in December 2010. The building has an Energy Star rating of 77, says Jonathan Flannery, executive director of engineering and operations, campus operations. The 54-building campus has been an Energy Star partner for five years and added the new facility to its existing portfolio.

To separate the data center load for monitoring, a modem on the UPS tracks the energy used by the data center and reports it to the building management system. Energy-saving features in the data center include using outside air to cool the building instead of chilling the air with air handlers. The facility also uses other typical technologies such as hot water controls that are turned down during the evening, automated lighting controls and occupancy sensors, and a digital power management system.
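
As a rough illustration of what that separate metering makes possible, the sketch below computes power usage effectiveness (PUE) from two meter readings. The variable names and sample values are hypothetical, chosen only to show the calculation, and are not figures from the UAMS system.

    def compute_pue(total_facility_kw, it_load_kw):
        # PUE = total facility power / IT equipment power; 1.0 is the ideal.
        if it_load_kw <= 0:
            raise ValueError("IT load must be positive")
        return total_facility_kw / it_load_kw

    # Hypothetical readings: the UPS output meter gives the IT load, and a
    # utility or building management system meter gives the whole-facility draw.
    it_load_kw = 320.0
    total_facility_kw = 480.0
    print(f"PUE = {compute_pue(total_facility_kw, it_load_kw):.2f}")  # PUE = 1.50

Tracking a figure like this daily is one way to spot equipment that is "using more electricity than it should," as Yaeger puts it.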

The program dates back to at least 2000, when "we were able to get funding and could make major impacts to our program." Energy efficiency is important, says Flannery, because "to make a dollar at a hospital, it requires a lot of work."

Every dollar the hospital makes costs 70 cents, according to Flannery. "If I don't have to spend a dollar on utilities, we can save a dollar that can go somewhere else—to patient care, buying a new MRI, education for our university. Every dollar we can save is a dollar that gets invested somewhere else."
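
A quick back-of-the-envelope calculation shows why avoided utility spending matters so much at that margin; the sketch below simply restates Flannery's figures in code.

    # Worked example of Flannery's margin argument (illustrative figures only).
    cost_per_revenue_dollar = 0.70           # "every dollar the hospital makes costs 70 cents"
    margin = 1.00 - cost_per_revenue_dollar  # 30 cents kept per dollar of revenue

    utility_savings = 1.00                   # one dollar not spent on utilities
    equivalent_revenue = utility_savings / margin

    print(f"${utility_savings:.2f} of avoided utility cost is worth about "
          f"${equivalent_revenue:.2f} of new revenue")  # about $3.33

In other words, at a 30-cent margin, every dollar saved on utilities is worth roughly what three dollars of new revenue would bring in.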

4. Data Center Innovation Demands Better Communication

Continued innovation in computer technology is pushing the facility management and information technology (IT) departments closer together. As servers become more compact, they take up less space, but at the same time they require many more kilowatts of power.

The appetite for energy creates the need for more space in the data center to house the facility infrastructure that keeps the computers from going down. In a data center, "white space" refers to the usable space, measured in square feet, where computer cabinets are housed. The amount of space needed by computers is getting smaller in part because servers are much thinner, allowing more to fit in less space.

"Companies can now incorporate blade servers, which can hold up to 42 servers per rack," says Paul E. Schlattman, vice president, mission critical facilities group, Environmental Systems Design. The new servers may now need only two racks where old servers would have needed 10.

Servers may also need less physical space because of virtualization, says Schlattman. Virtualization allows a server to run multiple platforms. In the past each server would use only 8 to 10 percent of its capacity, because it would run only a specific type of software. With virtualization, on the other hand, servers can run multiple platforms, "so now my server is running at 80 percent of its capacity," says Schlattman. "This also increases the need for power, because the (server) is running hotter." Although the blade rack requires more power than a normal rack, overall the data center will save both space and energy by using the blade rack.
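
A minimal sketch of the consolidation math behind that point appears below. The fleet size and rack counts are assumptions chosen to illustrate the utilization figures Schlattman cites, not measurements from an actual data center.

    import math

    # Rough consolidation sketch based on the utilization figures quoted above.
    legacy_servers = 200            # hypothetical legacy fleet
    legacy_utilization = 0.10       # roughly 8 to 10 percent of capacity
    virtualized_utilization = 0.80  # target after virtualization

    # Total useful work, expressed in "fully busy server" equivalents.
    work = legacy_servers * legacy_utilization                       # 20 equivalents

    # Hosts needed once each one runs at about 80 percent of capacity.
    hosts_needed = math.ceil(work / virtualized_utilization)         # 25 hosts

    servers_per_blade_rack = 42     # blade density cited by Schlattman
    racks_needed = math.ceil(hosts_needed / servers_per_blade_rack)  # 1 rack

    print(hosts_needed, racks_needed)

The same workload ends up on far fewer machines in far fewer racks, which is why the space savings outweigh the higher per-rack power draw.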

Those IT-side developments have huge implications for the data center facility infrastructure. As density increases, so does the need for support infrastructure: power transformers, uninterruptible power supply (UPS) systems, computer room air conditioners (CRACs) and chillers, and air distribution systems. In the highest tier data centers, support infrastructure may occupy four to six times the amount of space needed to house the computers. The higher the kilowatt load the computers draw, the more infrastructure will be needed.

The increases in density have been significant, says R. Stephen Spinazzola, vice president, RTKL Associates. "Ten years ago, 500 kilowatts [of power] was considered to be robust; today 1,000 to 5,000 kilowatts of power is robust," he says. Clearly, the infrastructure square footage needed to support 5,000 kilowatts will be much greater than the space needed to support computers running at lower power.

A single computer cabinet may have been powered by one kilowatt 10 years ago, but it now uses 50 kilowatts. "It is hard to distribute that much power and cooling in a small space," Spinazzola says. The most common mistake in legacy data centers is to keep "migrating technology," or increasing computing power in the same amount of space, without thinking about how the facility can supply all that power and cooling to support the increased IT load, Spinazzola says.
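
To see why infrastructure, rather than white space, drives the footprint at these densities, the figures quoted in this section can be combined into a rough estimate. The floor-area allowance per cabinet used below is an assumption for illustration only, not a design standard.

    # Rough footprint estimate combining the figures quoted in this section.
    it_load_kw = 5000        # "robust" IT load today, per Spinazzola
    kw_per_cabinet = 50      # modern high-density cabinet, per the article

    cabinets = it_load_kw / kw_per_cabinet  # 100 cabinets
    white_space_sqft = cabinets * 30        # assumed ~30 sq ft per cabinet incl. aisle share

    # Support infrastructure at four to six times the white space.
    support_space_sqft = (white_space_sqft * 4, white_space_sqft * 6)
    print(cabinets, white_space_sqft, support_space_sqft)  # 100.0 3000.0 (12000.0, 18000.0)

Under those assumptions, roughly 3,000 square feet of white space would need on the order of 12,000 to 18,000 square feet of support space, which is why facility and IT teams have to plan density changes together.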

