Building Operating Management

Getting a Handle on Server Energy



Soaring energy costs push industry to create metrics for energy use in data centers


By Brandon Lorenz, Senior Editor

Data centers are the facility equivalent of a 1967 Corvette. Most vintage Corvette owners probably know that their cars only get 10 mpg on a good day. Odds are they don’t care. They just want to go fast.

Most IT executives look at data centers the same way. They don’t care how much energy their servers use. They just want their data centers to run reliably and securely.

Reasons for the situation abound. Saddled with a crowded agenda that includes fostering growth, maintaining reliability and avoiding security breaches, data center operators can quickly lose sight of efficiency.

Equally problematic, IT departments are usually not charged within a company for the energy consumed by a data center. Instead, the cost is often borne by the facility department.

Because IT departments don’t pay their fair share of the cost to run a data center, energy efficiency isn’t a priority for them. What’s worse, many companies don’t even have a clear idea how much energy their data centers are using.

“Right now the cost of inefficiency isn’t centralized,” says Andrew Fanara, director of ENERGY STAR product development. “It’s spread out over a business enterprise. The cost is increasing all the time, but it’s diluted and masked and the CFO can’t see it.”

That lack of awareness points to a curious fact: Although data centers are among the most energy-intensive spaces to manage, there are no commonly accepted standards for measuring their energy use.

Those who manage office space can use the ENERGY STAR for Buildings tool to compare the energy efficiency of buildings across their portfolio. The U.S. Green Building Council’s LEED rating system can rank a building’s overall sustainability.

But the ENERGY STAR tool doesn’t work for data centers, and while it’s possible to pursue LEED certification for data centers, few such examples exist. While ASHRAE’s TC9.9 addresses the operation of data centers, some data center operators ignore the guidelines and run their data center cooler than needed because of perceived reliability concerns.

Lack of standards is a problem that filters down to the component level. For example, while ENERGY STAR has had a personal computer specification for 15 years, there is still no specification for servers.

Lack of standards can’t obscure an unpleasant fact: Data center energy use is on an unsustainable path. A study on data centers published by the EPA in August concluded that data centers consumed 1.5 percent of the nation’s electricity in 2006. At 61 billion kWh per year, that’s an amount equal to the electricity consumed by the nation’s entire transportation manufacturing industry, or by 5.8 million average households.

Even more troubling is the growth. Electricity consumption in data centers has more than doubled since 2000. If the trend continues unchecked, electricity use would grow to 124.5 billion kWh per year in 2011, according to EPA projections. That would mark a fourfold increase from 2000.

With server prices approaching commodity levels, IT departments are often tempted to simply pack more servers into data centers. More servers mean more energy use. But that’s only part of the energy problem. Even as data centers have grown more crowded, new servers are consuming more energy than previous models. Hence the need for an ENERGY STAR specification for servers.

It’s no easy task, in part because the term “server” can refer to a variety of different devices. Among the different classes of equipment: volume servers, mid-range servers and high-end servers. Network storage devices also need to be taken into account.

“The common language really needs to be defined,” says Fanara. “We don’t really have industry definitions as much as we do shorthand.”

Volume servers are the data center workhorses. The average volume server used 225 watts in 2006, up from 186 in 2000. Energy use in high-end servers grew even faster, 8,163 watts in 2006 compared to 5,534 in 2000, according to EPA.

Servers can be measured in two ways: processing performance and energy efficiency. Improvements are being made on both fronts. Unfortunately, the performance of servers is growing much faster than improvements in efficiency. That means more heat is generated. “That is a fundamental roadblock. At some point these products will end up cooking themselves unless there is a radical change in efficiency,” says Fanara.

More Efficient Servers?

Fortunately, there are signs of progress on the horizon. Climate Savers Computing is an association of IT suppliers. One goal of the association is to improve the efficiency of server power supplies. In the average server, power supply losses represent 15 percent of total energy consumption, according to EPA.

Climate Savers’ voluntary program calls on server manufacturers to increase the efficiency of power supply units annually between now and 2010. The program sets a standard of 85 percent efficiency through June 2008; after that, the standard would gradually rise to 92 percent.
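The arithmetic behind those efficiency targets is straightforward: every percentage point of supply efficiency reduces the power drawn from the wall, and the waste heat the cooling system must then remove. A short sketch (the 200-watt component load is illustrative, not a figure from the article):

```python
# Wall power drawn, and power wasted as heat, for a server whose internal
# components need a fixed load, at different power supply efficiencies.
def wall_power(load_w: float, efficiency: float) -> float:
    """Power drawn from the outlet to deliver load_w watts to the components."""
    return load_w / efficiency

load = 200.0  # watts needed by CPU, memory, disks, etc. -- illustrative
for eff in (0.70, 0.85, 0.92):
    draw = wall_power(load, eff)
    print(f"{eff:.0%} efficient supply: draws {draw:.0f} W, "
          f"wastes {draw - load:.0f} W as heat")
```

Moving from a 70 percent to a 92 percent efficient supply cuts the supply’s own losses by roughly four-fifths, and the cooling load shrinks with it.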

The cost of the improvements is expected to add about $30 to the cost of a server, says Pat Tiernan, vice president, social and environmental responsibility for HP and a board member of Climate Savers. “The ROI is there today,” Tiernan says. “Even if it costs you $20 or $30 at the server level, you will easily make that back.”

This points to the need for data center operators to consider total cost of ownership (TCO) when purchasing IT equipment. By 2008, EPA projects that the energy cost of running a server will exceed the capital cost of purchasing it. Using TCO to guide purchasing decisions is the single best step data center operators can take to drive down data center costs, EPA says. Creating an ENERGY STAR standard for servers is meant to make it easier to determine TCO for servers.
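The TCO argument reduces to simple arithmetic: purchase price plus lifetime electricity, with the cooling and infrastructure overhead folded in. A sketch, with the price, service life and electricity rate all hypothetical (the 225-watt draw echoes the EPA average volume server figure cited earlier):

```python
# Hypothetical total-cost-of-ownership sketch: purchase price plus
# lifetime electricity cost, including infrastructure overhead (PUE).
HOURS_PER_YEAR = 8760

def tco(price: float, avg_watts: float, pue: float, years: float,
        rate_per_kwh: float = 0.10) -> float:
    """Purchase price plus electricity over the server's service life.
    PUE scales the server's own draw up to total facility draw."""
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * years * pue
    return price + kwh * rate_per_kwh

# A $3,000 volume server drawing 225 W, in a PUE-2.0 facility, over 4 years:
print(f"${tco(3000, 225, 2.0, 4):,.0f}")  # -> $4,577
```

Under these assumed numbers, electricity adds more than half the purchase price again; at higher PUE values or electricity rates, the energy bill alone overtakes the hardware cost.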

“One major message I’ve gotten is that we need to help organizations develop a better model of TCO that gets the attention of the C-suite because it helps drive change in an organization,” Fanara says.

But a standard isn’t likely to materialize overnight. “I think this is one of the most challenging products we have dealt with,” Fanara says.

Equally important for facility executives would be a standard that goes beyond individual components to examine a data center’s total energy use. That’s where an organization called the Green Grid comes in. Green Grid is an association of hardware and software companies focused on reducing data center energy use.

Green Grid has proposed a metric known as power usage effectiveness (PUE). PUE is calculated by dividing a facility’s total power usage by the power used by IT equipment alone. The current range is wide: 1.0 to 3.0.

“The question you are trying to answer is, ‘How much power did I consume on a unit basis to deliver useful power to the customer on the floor?’” says Jim Smith, vice president of engineering for Digital Realty Trust.

Unless a data center was built with efficiency in mind, a PUE of 3.0 wouldn’t be unusual. That means a server that alone consumes 500 watts actually draws 1,500 watts from the grid once the supporting infrastructure is factored in.
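The calculation is simple enough to sketch in a few lines; the 500-watt server and PUE of 3.0 are the figures from the example above:

```python
# PUE = total facility power / IT equipment power. Lower is better;
# 1.0 would mean every watt from the grid reaches the IT equipment.
def pue(total_facility_w: float, it_equipment_w: float) -> float:
    return total_facility_w / it_equipment_w

# A 500 W server in a PUE-3.0 facility pulls 1,500 W from the grid
# once cooling, power distribution and other overhead are counted.
server_w = 500
facility_pue = 3.0
print(f"Grid draw: {server_w * facility_pue:.0f} W")  # Grid draw: 1500 W
print(f"Check: PUE = {pue(server_w * facility_pue, server_w):.1f}")
```

The same two numbers run the other way in practice: meter the facility total and the IT load, divide, and track the quotient over time.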

If a facility executive wants to use PUE to manage a data center, there are a few things to keep in mind. First, don’t expect a static number. Because server load is not constant, the efficiency and heat generated will vary. That in turn means the load on the cooling system will fluctuate. Cooling load can also vary over the course of the year as temperature and humidity change. PUE will also vary by geographic location for the same reasons.

“A universal approach simply won’t work,” says Bill Kosik, managing principal at EYP Mission Critical Facilities. “This is more complex than any other industry in terms of developing a standard.”

That doesn’t mean facility executives should avoid using PUE. But it does mean the metric needs to be measured frequently. Digital Realty Trust uses sophisticated revenue-grade submeters to help automate the process. Some might not consider metering worth the cost. Smith counters by saying that in a $10 million data center project, the meters amount to $75,000.

Even without submeters, facility executives can manually measure consumption at the power distribution units and compare it to the overall electric bill. The key is to measure at the same time of day, Smith says.

PUE gives facility executives a tool for continuous improvement, much like the ENERGY STAR for Buildings program. At Digital Realty, Smith is watching the PUE and looking for gradual improvement.

Data Center of the Future

So if TCO, PUE and an ENERGY STAR specification for servers become common tools for managing a data center, how might they reshape those facilities?

To begin, companies that are focused on efficiency will become more strategic about how they design their data centers. Many companies already consider energy costs when deciding where to locate a data center.

Companies focused on efficiency will likely also begin to vary the cooling strategies employed based on location. Using air economizers to cool a data center is controversial. Some believe that particulates in the air could cause hard drives to fail sooner. Air economizers also add complexity to a control system. But as energy costs soar, they could become more common.

“It can be done. It’s not a foreign concept,” says Kosik. “I think this is about people’s perceptions, and rightfully so when you have tens of millions of dollars of equipment at stake.”

Adding to the momentum is a study released in May from Lawrence Berkeley National Laboratory. It concluded that fears of shorter equipment life in data centers using outside air appear overblown.

Other facilities may use water economization, where a chilled water loop is used for cooling and the loop is exposed to cold outside air, particularly in facilities with larger chillers in cold climates, Smith says. Still other facility executives may decide a cogeneration approach makes the most sense.

Data centers of the future are almost certain to get larger. While it’s not unusual for a company to have at least one data center incorporated into a larger office building, such setups lack economies of scale when it comes to power and cooling.

And companies may reach a point where it becomes difficult to expand or reconfigure data centers located in office buildings. For example, as power requirements grow, it can be difficult to add a second utility feed, says Smith.

Meanwhile, consolidation will appear on the component level. Traditionally, servers are only given one task. Virtualization allows a server to do multiple jobs simultaneously.

While it does come with some management challenges, consolidating the work of several single-task servers onto one machine improves utilization. Servers with higher utilization are more energy efficient.

Virtualization also gives a facility room to grow by reducing the number of servers in an existing data center. Using virtualization allowed the Postal Service to slash the number of servers it needed from 895 to 104, according to EPA. “It’s the next big step,” says Fanara.

If the industry adopts an aggressive mindset, data centers of the future could be extraordinarily efficient compared to today. EPA modeled five energy scenarios for data center energy use by 2011. Under the most aggressive scenario, projected energy use could drop to 33.6 billion kWh per year by 2011. That would represent a savings of 74 billion kWh per year, equivalent to $5.1 billion in costs.

The savings would also avoid 47 million metric tons of carbon dioxide compared to the current energy trend. But hitting that target won’t be easy. Because energy use amounted to 61 billion kWh per year in 2006, that scenario requires a net decrease in data center energy use, despite a growing number of servers.

Whether such an aggressive target can be hit is an open question. But it’s clear the efficiency problem needs to be attacked.

“Other businesses have been doing this kind of thing for years,” Smith says. “It’s kind of astonishing to me that data centers, which are historically so capital expensive, have not been managed with these kinds of metrics.”

 

Taking Action to Improve Efficiency Now

Facing constantly increasing energy costs, some facility executives might not want to wait for an ENERGY STAR standard before tackling the efficiency of their data centers.

The good news is that there is plenty facility executives can do while waiting for an ENERGY STAR server standard to be released.

ASHRAE recommends a hot-aisle/cold-aisle configuration. Make sure the raised floor stays properly configured as a data center ages; one risk is that, after the space is reconfigured, the raised floor will pour cold air into a hot aisle.

Airflow management is important. “It’s one of the primary things you can do, and it’s not about making dramatic changes,” says Bill Kosik, principal at EYP Mission Critical Facilities.

Kosik suggests taking the temperature of a data center with a temperature sensor to find the hot spots. Add perforated tiles where needed. Make sure air isn’t leaking around holes in the floor for cabling. Install blank-off plates in cabinets to control airflow.

Many data centers are overcooled in an attempt to fix hot spots. Fixing hot spots by adjusting airflow instead might allow the overall supply temperature to be raised; consider increasing it gradually, on a trial-and-error basis, says Kosik.

It often takes years before a data center reaches full capacity. If the data center isn’t fully loaded, adjust fan and chiller output to match the load.

Consider asking the IT department: Are all the servers actually needed? It’s not unusual for IT to leave “dead” servers running, even though they aren’t performing useful work, says Andrew Fanara, director of ENERGY STAR product development.

Another question for IT departments: Has power management been enabled on the servers? Power management puts servers into an energy-saving mode when they aren’t being used. But most servers don’t use it, says Pat Tiernan, vice president, social and environmental responsibility for HP.

— Brandon Lorenz

 

Where Does Server Electricity Go?

CPU 80 Watts
Peripheral slots 50 Watts
Power supply losses 38 Watts
Memory 36 Watts
Motherboard 25 Watts
Disks 12 Watts
Fan 10 Watts
Total 251 Watts

Source: EPA Report to Congress on Server and Data Center Energy Efficiency
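The breakdown above can be checked against the 15 percent power-supply-loss figure cited in the article. A small sketch using the EPA numbers from the table:

```python
# Where a typical server's electricity goes (EPA figures from the table above).
budget_w = {
    "CPU": 80,
    "Peripheral slots": 50,
    "Power supply losses": 38,
    "Memory": 36,
    "Motherboard": 25,
    "Disks": 12,
    "Fan": 10,
}
total = sum(budget_w.values())
print(f"Total: {total} W")  # Total: 251 W
share = budget_w["Power supply losses"] / total
print(f"Power supply losses: {share:.0%}")  # Power supply losses: 15%
```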




Posted 11/1/2007