Developing An Energy-Saving Strategy For Your Data Center

Between the power needed to run the computer equipment itself and the energy used to heat and cool it, data centers can be intensive users of energy. "In a data center, energy typically is the highest cost," says Dale Sartor, a staff engineer with Lawrence Berkeley National Laboratory, a Department of Energy National Lab.

The amount of energy used by data centers captured policymakers' attention in 2007, when the Environmental Protection Agency (EPA) published its "Report to Congress on Server and Data Center Energy Efficiency," says Don Beaty, founder of DLB Associates, an engineering firm. In the report, the EPA estimated that by 2011, the country's data centers and servers would consume 100 billion kilowatt-hours (kWh) of energy.

Two categories of energy savings usually are possible in a data center, Sartor says. One results from increasing the efficiency of the IT equipment itself, while the other comes from boosting the efficiency of the infrastructure that supports it.

The IT Load

For starters, it's not uncommon for a server to be operating at less than 10 percent of its capacity. That's because the individual or department working with an application often wants a dedicated server – what some call "server hugging," Sartor adds. That is, the individual in charge wants to see and touch the machine that will house the application.

What's more, some estimate that up to 20 percent of servers aren't used at all, Sartor says. This happens when an application is no longer used by the organization, but the server that ran it keeps running.

This wastes energy. "Servers that are spinning use about as much in idle mode as when they're doing computations," says Alexis Karolides, a principal in the buildings practice with Rocky Mountain Institute, a nonprofit based in Snowmass, Colo., that focuses on driving the efficient and restorative use of resources.
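
To see what that waste can add up to, here is a back-of-envelope sketch in Python. Every figure in it (fleet size, per-server draw, electricity rate) is an illustrative assumption, not a measurement from any particular facility.

```python
# Back-of-envelope estimate of energy wasted by unused ("comatose") servers.
# All figures below are illustrative assumptions, not measurements.

SERVER_COUNT = 500       # assumed fleet size
IDLE_FRACTION = 0.20     # up to 20% of servers may run no workload at all
WATTS_PER_SERVER = 300   # assumed average draw; idle draw is close to active draw
HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.10      # assumed electricity price, $/kWh

idle_servers = SERVER_COUNT * IDLE_FRACTION
wasted_kwh = idle_servers * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000
print(f"Comatose servers: {idle_servers:.0f}")
print(f"Wasted energy: {wasted_kwh:,.0f} kWh/year")
print(f"Wasted cost:   ${wasted_kwh * RATE_PER_KWH:,.0f}/year")
```

Under those assumptions, 100 idle machines burn roughly 263,000 kWh a year before a single watt of cooling overhead is counted.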

Improving the utilization rate of the servers in a data center to about 60 to 70 percent can be done through virtualization, Sartor says, which allows a single physical machine to run multiple operating systems at the same time. Some virtualization packages can shift those operating systems between servers if one starts consuming too much energy; some can also move applications between data centers if one center's operation is interrupted.
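
The arithmetic behind consolidation is straightforward. The sketch below, using assumed utilization figures for a hypothetical 500-server fleet, estimates how far virtualization could shrink the physical footprint.

```python
# Rough consolidation estimate. The fleet size and utilization
# figures are assumptions for illustration.
import math

physical_servers = 500
current_utilization = 0.10   # servers often run below 10% of capacity
target_utilization = 0.65    # midpoint of the 60-70% range cited above

# The total useful work stays the same; only the host count changes.
work = physical_servers * current_utilization
hosts_needed = math.ceil(work / target_utilization)
print(f"Hosts needed after consolidation: {hosts_needed}")
print(f"Reduction in physical servers: {1 - hosts_needed / physical_servers:.0%}")
```

With those numbers, the same workload fits on 77 hosts, an 85 percent reduction in machines drawing power.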

Virtualization produces energy savings because most data centers are designed with a great deal of redundancy in case of server failure, Sartor says. While data center reliability remains critical, the ability to move applications reduces the need for redundancy. "You can have redundancy in the network," Sartor says.

If the servers in a data center are older, it may make sense to consider replacing them with new, more efficient models. Servers' efficiency levels typically double every several years, Sartor says.
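
Whether a refresh pays off is a simple payback calculation. The sketch below assumes one new server absorbs the work of several older ones, which is plausible if per-server performance roughly doubles each generation; every number in it is an illustrative assumption.

```python
# Simple payback sketch for a server refresh. All numbers are
# assumptions for illustration, not vendor figures.

OLD_SERVERS_RETIRED = 4
OLD_WATTS_EACH = 300
NEW_WATTS = 350
PUE = 2.0                  # each IT watt saved also saves a facility watt
HOURS = 8760
RATE = 0.10                # assumed electricity price, $/kWh
NEW_SERVER_COST = 5000.0   # assumed purchase price

it_watts_saved = OLD_SERVERS_RETIRED * OLD_WATTS_EACH - NEW_WATTS
annual_savings = it_watts_saved * PUE * HOURS / 1000 * RATE
print(f"Annual energy savings: ${annual_savings:,.0f}")
print(f"Simple payback: {NEW_SERVER_COST / annual_savings:.1f} years")
```

Under those assumptions the refresh pays for itself in energy alone in about three and a half years, before counting maintenance or space savings.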

Data Center Infrastructure

A more efficient data center requires determining your current IT load in detail, as well as its fluctuations, Beaty says. If the fluctuations are significant, you may need to map out the load on an hour-by-hour basis to see how the variations affect the power and cooling capacity you actually need at any given time. Over time, many data centers have become "fat and over-designed," Beaty says, mostly due to a focus on temperature and humidity control.
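
In practice, that mapping starts with logged data. A minimal sketch of the analysis is below; the CSV file name and column header are hypothetical.

```python
# Sketch of the hour-by-hour load mapping described above: given a year
# of hourly IT-load readings in kW, summarize how much the load swings.
# The CSV layout ("it_load_kw" column) is a hypothetical example.
import csv
import statistics

def summarize_load(path: str) -> None:
    with open(path, newline="") as f:
        loads = [float(row["it_load_kw"]) for row in csv.DictReader(f)]
    peak, mean, low = max(loads), statistics.mean(loads), min(loads)
    print(f"Peak:  {peak:8.1f} kW  <- what capacity must handle")
    print(f"Mean:  {mean:8.1f} kW  <- what efficiency should target")
    print(f"Low:   {low:8.1f} kW  <- where oversized plant runs worst")
    print(f"Swing: {(peak - low) / peak:6.1%} of peak")

summarize_load("hourly_it_load.csv")  # hypothetical file name
```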

One measure of this is the PUE, or power usage effectiveness, ratio. This is the total amount of power coming into a data center divided by the amount used by the IT equipment itself. The ratio at many data centers has been around 2, says Sartor, meaning twice as much power comes into the data center as is actually used by the IT systems.
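
As a formula, PUE is simply total facility power divided by IT power; a value of 1.0 would mean zero overhead. A minimal illustration, with sample readings:

```python
# PUE in code form: total facility power divided by IT power.
# The sample readings are illustrative.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: 1.0 would mean zero overhead."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,000 kW to deliver 500 kW to the IT load:
print(pue(1000, 500))   # 2.0 -- one watt of overhead per IT watt
```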

One way to cut the PUE is through the layout of the server racks. For years, the racks in many data centers were configured so that the hot exhaust from one row blew straight into the intake of the next, forcing the cooling system to work harder. That has since changed in most data centers, Beaty notes: the racks usually are lined up front to front and back to back, effectively creating alternating hot and cold aisles. "It significantly improves the efficiency of the cooling system," he adds.

More can be done, however. Beaty points out that the hot air from one row of servers can flow over the top of the next row and into the cold aisle, still resulting in wasted energy. "The isolation (of the hot air) needs to be 100 percent," he says. One way to boost isolation is by extending the server racks all the way to the ceiling, creating a physical barrier between the hot and cold aisles. Another option is to put return air ducts over the hot aisles, Beaty adds.

In some data centers, these are relatively simple projects. However, they can get more complicated if, for instance, installing walls to isolate the hot and cold air interferes with a sprinkler system. Before erecting isolation walls, it may be necessary to rework the sprinkler system.

Still, the effort can be worth it. Effectively isolating the hot and cold air can cut energy use by up to 25 percent, Beaty says.
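
Where those savings show up depends on how the overhead breaks down. The sketch below applies a 25 percent cut to an assumed cooling share of an assumed facility load to show the effect on PUE; all of the numbers are illustrative.

```python
# Effect of a 25% cooling cut on PUE. The load breakdown is an
# assumption for illustration, not a measured facility.

it_kw = 500.0
cooling_kw = 400.0          # assumed cooling load before containment
other_overhead_kw = 100.0   # assumed lighting, UPS losses, etc.

before = (it_kw + cooling_kw + other_overhead_kw) / it_kw
after = (it_kw + cooling_kw * 0.75 + other_overhead_kw) / it_kw
print(f"PUE before containment: {before:.2f}")   # 2.00
print(f"PUE after containment:  {after:.2f}")    # 1.80
```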

With some of the newer servers, the humidity level in the data center can also be allowed to rise higher than in the past, when punch cards were used in data centers, Sartor says. Many of today's servers can operate at higher temperatures than previous models as well.

By boosting the efficiency of their data centers' infrastructure, some very large centers have achieved PUEs below 1.1, says Karolides. In other words, almost all the power coming into the center is used by the IT equipment itself. While this level may be beyond the reach of a typical data center, a significant reduction often is possible.
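
The gap between a PUE of 2 and a PUE near 1.1 is easy to quantify. For an assumed 500 kW IT load:

```python
# How much power the same IT load drags in at different PUEs.
# The 500 kW IT load is an illustrative assumption.

it_load_kw = 500.0
for pue in (2.0, 1.5, 1.1):
    total = it_load_kw * pue
    overhead = total - it_load_kw
    print(f"PUE {pue}: total {total:6.0f} kW, overhead {overhead:5.0f} kW")
```

At a PUE of 2.0, the facility spends 500 kW on overhead; at 1.1, the same IT load carries only 50 kW of overhead.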

Modeling

Of course, before making any changes, it's critical to model the likely impact, perhaps using tools like computational fluid dynamics (CFD), which uses numerical methods to simulate fluid flows. (In mechanical engineering, air is considered a fluid.) "You want to save energy without hurting uptime," Beaty notes.
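
Real CFD packages model airflow, pressure, and turbulence in three dimensions, which is well beyond a short example. As a toy taste of the underlying idea, the sketch below relaxes a 2D grid to a steady-state temperature field between an assumed hot-aisle wall and an assumed cold-supply wall.

```python
# Toy steady-state temperature field via Jacobi relaxation of the
# Laplace equation. Boundary temperatures are assumptions; real
# data-center CFD also models airflow, not just heat diffusion.
import numpy as np

n = 50
t = np.full((n, n), 22.0)   # room starts at an assumed 22 C
t[:, 0] = 35.0              # left wall: hot-aisle exhaust (assumed)
t[:, -1] = 18.0             # right wall: CRAC supply air (assumed)

for _ in range(5000):       # iterate until roughly steady
    # each interior cell becomes the average of its four neighbors;
    # the boundary rows and columns stay fixed
    t[1:-1, 1:-1] = 0.25 * (
        t[:-2, 1:-1] + t[2:, 1:-1] + t[1:-1, :-2] + t[1:-1, 2:]
    )

print(f"Mid-room temperature: {t[n // 2, n // 2]:.1f} C")
```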

That's key, as a significant obstacle to boosting data center efficiency is the centers' mission-critical role. Any changes can't increase the risk of failure, and those involved in data center operations need to know the modifications aren't risky; otherwise, they're likely to balk.

Another obstacle can be the typical lack of interaction between the facilities and IT departments. Boosting a center's efficiency typically requires the two to work together. For instance, facilities may need to get involved in equipment purchases, to ensure efficient models are considered, Karolides says.

Finally, it's not unusual to meet resistance to the idea of change itself. Overcoming this usually requires working across departments to gather input, ideas and buy-in before moving forward. A focus on the potential savings – reductions of 30 to 50 percent aren't unheard of, Karolides says – also can bring around any objectors.

