
4 FM quick reads on data centers

1. Higher power loads need more infrastructure


Today's tip is to know the needs of denser data centers. As servers become more compact, they take up less space, but require much more energy. "Companies can now incorporate blade servers, which can hold up to 42 servers per rack," says Paul E. Schlattman, vice president, mission critical facilities group, Environmental Systems Design. The new servers may now need only two racks where old servers would have needed 10.

Servers may also need less physical space because of virtualization, says Schlattman. In the past, each server would use only 8 to 10 percent of its capacity, because it would run only a specific type of software. With virtualization, on the other hand, servers can run multiple platforms, "so now my server is running at 80 percent of its capacity," says Schlattman. "This also increases the need for power, because the (server) is running hotter."
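A rough sense of that math helps when planning space and power. The Python sketch below works through the consolidation arithmetic; the server counts and utilization levels are illustrative assumptions, and only the 42-servers-per-rack blade density comes from the article.

```python
import math

# A minimal sketch of the consolidation arithmetic. The server counts and
# utilization levels are illustrative assumptions; only the 42-servers-per-rack
# blade density comes from the article.

legacy_servers = 420       # e.g., 10 racks of 42 servers each
legacy_util = 0.09         # roughly 8 to 10 percent average utilization
virtual_util = 0.80        # roughly 80 percent after virtualization
servers_per_rack = 42      # blade density cited above

# Hold useful capacity constant while each host runs much closer to full load.
useful_capacity = legacy_servers * legacy_util
virtual_hosts = math.ceil(useful_capacity / virtual_util)

racks_before = math.ceil(legacy_servers / servers_per_rack)
racks_after = math.ceil(virtual_hosts / servers_per_rack)
print(f"{racks_before} racks of lightly used servers -> {racks_after} racks of "
      f"virtualized hosts carrying the same workload")
```

The takeaway matches Schlattman's point: the same workload lands in a fraction of the racks, so each remaining rack draws, and dissipates, far more power.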

As density increases, so does the need for support infrastructure: power transformers, uninterruptible power supply (UPS) systems, computer room air conditioners (CRACs) and chillers, and air distribution systems. In the highest-tier data centers, support infrastructure may occupy four to six times the amount of space needed to house the computers. The higher the kilowatt load the computers draw, the more support infrastructure is needed.

"Ten years ago, 500 kilowatts [of power] was considered to be robust; today 1,000 to 5,000 kilowatts of power is robust," says R. Stephen Spinazzola, vice president, RTKL Associates. A single computer cabinet may have been powered by one kilowatt 10 years ago, but it now uses 50 kilowatts. "It is hard to distribute that much power and cooling in a small space," Spinazzola says. The most common mistake in legacy data centers is to keep increasing computing power in the same amount of space, without thinking about how the facility can supply all that power and cooling to support the increased IT load, Spinazzola says.

Avoiding that problem requires the facility management and IT departments to be in close communication. The facility department needs to know about IT plans that could increase density, while IT has to be informed about how much capacity the infrastructure has.


2. Experts conclude data centers can be warmer

Today's tip is to consider a warmer temperature in your data center. Engineers are working on more sophisticated technologies for both computers and cooling equipment.

In the early days of data centers, computer equipment was kept at very cool temperatures, from 68 F to 70 F. These temperatures were mandated by computer manufacturers, who would not guarantee their equipment at higher temperatures. In 2008, however, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) revised its guidelines to a recommended temperature band of 64.4 F to 80.6 F. "What is so powerful about these new recommended ranges is that they apply to legacy IT equipment," says Don Beaty, president of DLB Associates.

Using the higher end of the new temperature ranges can significantly reduce cooling energy use in chilled water systems. "Normally the water in the chiller would be cooled to 44 degrees, but if they take it to 50 degrees, it requires half of the energy," says Jim McEnteggart, vice president of Primary Integration Solutions.
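What that means in operating dollars depends on the plant, but a simple sketch shows the order of magnitude. The chiller size, load profile and utility rate below are assumptions; the 50 percent savings fraction is simply McEnteggart's example treated as an input.

```python
# A rough sketch of the potential savings. Chiller size, load factor, and
# utility rate are assumptions; the 50 percent figure is McEnteggart's example
# treated as an input, and real results will vary by plant and climate.

chiller_kw = 350            # assumed average chiller draw at a 44 F setpoint
hours_per_year = 8760       # the plant runs around the clock
savings_fraction = 0.5      # quoted potential from raising supply water to 50 F
rate_per_kwh = 0.10         # assumed blended utility rate, $/kWh

baseline_kwh = chiller_kw * hours_per_year
saved_kwh = baseline_kwh * savings_fraction
print(f"~{saved_kwh:,.0f} kWh per year avoided, about ${saved_kwh * rate_per_kwh:,.0f}")
```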

Some data centers are running at only slightly higher temperatures, says Paul Mihm, executive vice president, technical services group, of Rubicon, "because the time to reach critical temperature is significantly shorter in the event of a failure." One solution, says Mihm, is to have a backup system to exhaust hot air out of the space. Most environments could withstand a short period of heat until the backup system kicked in, he says.

Another option is thermal energy storage using tanks of cold water. In water-cooled systems, the tank could be used to pump cold water through the system very quickly when a facility switches to emergency power.

While the type of data center will largely determine how far it can push the new ASHRAE ranges, geography also plays a role: a facility in a hot climate has less tolerance for running near the top of the band. If the data center needs to be cooled off rapidly, the outside air will not be much help, so there is less margin for error.
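For the thermal-storage option, a first-pass tank sizing is straightforward heat-balance arithmetic. In the sketch below, the IT load, ride-through time and usable temperature rise are all assumptions to be replaced with site-specific values.

```python
# First-pass sizing for a chilled-water storage tank that rides through the gap
# between losing normal power and restoring mechanical cooling. The load,
# ride-through time, and usable temperature rise are illustrative assumptions.

it_load_kw = 1000          # assumed critical IT load to absorb
ride_through_min = 10      # assumed minutes until chillers are back on line
usable_delta_t_f = 12      # assumed usable temperature rise in the stored water

BTU_PER_KW_HR = 3412       # heat rejected per kW of IT load, per hour
LB_PER_GALLON = 8.34       # water weight; specific heat is ~1 BTU/lb-F

heat_to_absorb = it_load_kw * BTU_PER_KW_HR * (ride_through_min / 60)
gallons = heat_to_absorb / (LB_PER_GALLON * usable_delta_t_f)
print(f"~{gallons:,.0f} gallons of stored chilled water for a "
      f"{ride_through_min}-minute ride-through")
```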

3. Ways to protect data centers from fire

Today's tip is to make sure your data center is adequately protected from fire. In the past, of course, data centers could use Halon to put out an electrical fire. But once it became known that Halon was destroying the ozone layer, it was phased out for new systems.

Halon alternatives generally fall into two categories: clean agent systems, many of which use halocarbons, and inert gases. Clean agent systems extinguish fires by removing heat. Inert gases essentially suffocate the fire by depriving it of oxygen. Both can be "excellent, reliable systems," if they are properly designed and commissioned, says Scott Golly, senior fire protection engineer at Hughes Associates. Inert gas systems use a higher concentration of gas to extinguish a fire than halocarbon systems, so they require more storage space.
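The storage-space difference follows directly from the design concentrations. The comparison below is deliberately crude: it treats the agent as simply added to the room air and uses ballpark concentrations, not values from an actual NFPA 2001 design.

```python
# A deliberately crude comparison of agent volumes. It treats the agent as
# simply added to the room air until it reaches the design concentration, and
# the concentrations are ballpark figures, not values from an NFPA 2001 design.

room_volume_ft3 = 20_000          # assumed protected room volume
halocarbon_conc = 0.07            # ~7 percent design concentration (illustrative)
inert_gas_conc = 0.38             # ~38 percent design concentration (illustrative)

def agent_volume(room_ft3: float, conc: float) -> float:
    """Gas volume x such that x / (room_ft3 + x) equals the design concentration."""
    return room_ft3 * conc / (1 - conc)

halocarbon_ft3 = agent_volume(room_volume_ft3, halocarbon_conc)
inert_ft3 = agent_volume(room_volume_ft3, inert_gas_conc)
print(f"halocarbon: ~{halocarbon_ft3:,.0f} ft3 of agent gas")
print(f"inert gas:  ~{inert_ft3:,.0f} ft3, about {inert_ft3 / halocarbon_ft3:.0f}x more")
```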

Any facility using a "dry," gaseous product for fire suppression must also have a water-suppression system, according to Kevin J. McCarthy, vice president of engineering company EDG2. But using water in a data center "can cause catastrophic damage to equipment," Golly says.

The sensitivity of conventional sprinklers may justify a pre-action sprinkler system, which requires multiple events before the pipes flood with water. A pre-action system holds the water supply behind a valve, so the pipes over the protected space stay dry until the system is tripped.

A double-interlock pre-action system may require the activation of two smoke detectors in two different zones before the deluge valve opens to fill the pipes; water then discharges only at sprinkler heads that have opened from heat. The clean agent or inert gas suppression, for its part, is designed to put out the fire long before any sprinkler head is set off, as the sketch below illustrates.
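The sequencing logic is easier to see written out. The following is a minimal model of a double-interlock arrangement as described above; the event names and cross-zoning scheme are illustrative, not taken from a specific listed control panel.

```python
# A minimal model of the double-interlock sequence described above. Event
# names and the cross-zoning scheme are illustrative, not from a listed panel.

def water_discharges(smoke_zone_a: bool, smoke_zone_b: bool,
                     sprinkler_head_open: bool) -> bool:
    """Pipes fill only on cross-zoned detection; water actually discharges only
    where heat has also opened a sprinkler head."""
    detection_confirmed = smoke_zone_a and smoke_zone_b  # two zones in alarm
    pipes_filled = detection_confirmed                   # valve opens, pipes flood
    return pipes_filled and sprinkler_head_open

print(water_discharges(True, False, False))  # one detector: pipes stay dry
print(water_discharges(True, True, False))   # cross-zoned alarm: pipes fill, no water yet
print(water_discharges(True, True, True))    # a head opens too: water discharges
```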

Facility managers must evaluate what would constitute an acceptable loss. "Can you afford to have all those computers taken off line for several weeks?" Golly says. "If you cannot, then you cannot rely solely on sprinklers." Other questions to consider include storage space and cost. Both clean agent and inert gas systems are more expensive than pre-action systems.

4. FM and IT need to work together

Today's tip is to involve both facility management and IT in designing a new data center. With input from both groups, design decisions can benefit both sides.

When FM and IT don't work together, the most common problem is over-built mechanical and electrical infrastructure: too much UPS capacity, too many generators and too much precision cooling installed on Day 1. The oversized equipment then operates very inefficiently, and reliability expectations may not be achieved.

The IT group may have critical applications that require a higher level of reliability than facility management plans to build. Often, one of the biggest issues is the need to maintain the data center while it continues to operate, or as the industry calls it, concurrent maintenance.

FM and IT need to agree on a set of performance objectives and success measures. Learn to communicate free of jargon, using easy-to-understand terms and descriptions. Describe the challenges each side faces, and interdependencies between both disciplines. Facility management and IT also need to jointly educate themselves about risk analysis, assessment and mitigation, in order to explain to each other what can happen under various scenarios. Necessary steps include going beyond the Uptime Institute Tier and other rating systems; exploring failure rates and their effects; and considering different ways to address risks, such as operations and maintenance improvements.

All of these considerations should be communicated to the on-site facility management and all shifts of the IT staff. If a problem occurs, both IT and facility management need to know. Finally, the team should conduct a post-event evaluation to determine the cause and prevent it from happening again.

Ten years ago, reliability was the top priority for data center design and operation. Now, cost to build and cost to operate are equally important. Solutions have to be scalable to allow for critical power and cooling to be installed in increments to match the growth in IT build-out. This cannot happen without close coordination between facility management and IT.
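One way to make the incremental approach concrete is to lay out capacity additions against a projected load curve. In the sketch below, the module size, starting load and growth rate are assumptions standing in for a real IT forecast.

```python
import math

# A simple sketch of incremental build-out: add critical-power and cooling
# modules as the IT load grows rather than installing full capacity on Day 1.
# Module size, starting load, and growth rate are assumptions, not a forecast.

module_kw = 250            # assumed size of each UPS/cooling increment
start_load_kw = 300        # assumed Day 1 IT load
annual_growth = 0.25       # assumed year-over-year load growth

load = start_load_kw
for year in range(1, 6):
    modules = math.ceil(load / module_kw)
    print(f"Year {year}: ~{load:,.0f} kW IT load -> {modules} x {module_kw} kW modules")
    load *= 1 + annual_growth
```

The point is not the specific numbers but the planning discipline: each module is purchased, installed and commissioned only when the jointly agreed IT forecast calls for it.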

