4 FM quick reads on data centers
1. UPS Systems, DC Power Can Solve Energy Issues
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that energy-saving UPS systems and DC power can help solve energy issues in data centers.
New UPS systems and high-voltage power supply are the two major trends in infrastructure design and engineering to reduce power consumption. DC power remains a rare solution in the United States today, but its potential energy savings make it worth weighing the benefits and risks.
Manufacturers have modified the designs of new UPS systems with an "energy-saver" operating mode, which raises their efficiency to approximately 99 percent at virtually any load. Owners have been cautious about adopting this new operating mode until the new systems prove themselves, but more owners are willing to consider it today.
Operating a UPS in energy-saver mode has clear advantages over the conventional operating mode of older UPS systems, whose efficiency falls into the 30 to 40 percent range at low loads. Even as the load increases on an older unit, it never achieves a higher efficiency level than about 80 percent.
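The difference those efficiency figures make can be sketched with a few lines of arithmetic. This is a minimal illustration using the approximate efficiency ranges cited above, not measurements of any specific product; the 100 kW load is a hypothetical example.

```python
# Illustrative comparison of UPS losses at the approximate efficiencies
# discussed above. Efficiency here is power delivered to the IT load
# divided by power drawn from the utility.

def ups_loss_kw(it_load_kw: float, efficiency: float) -> float:
    """Power drawn from the utility minus power delivered to the IT load."""
    return it_load_kw / efficiency - it_load_kw

# A hypothetical 100 kW IT load:
legacy_low_load = ups_loss_kw(100, 0.35)  # older UPS at low load (30-40% range)
legacy_best = ups_loss_kw(100, 0.80)      # older UPS at its best (~80%)
eco_mode = ups_loss_kw(100, 0.99)         # energy-saver mode (~99%, as cited)
```

At 80 percent efficiency, serving a 100 kW load dissipates 25 kW in the UPS itself; in energy-saver mode the same load loses only about 1 kW.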
High-voltage power supply is an effective way to cut capital costs and power requirements, and it is an idea whose time has finally come. Running at higher voltages not only reduces the capital cost of wiring as the system uses fewer, smaller wires, but at higher voltages, the current is lower, so less energy is lost through the wire. One downside is that, although high-voltage computer equipment is more available than it once was, it is still a custom order.
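The "less energy lost through the wire" claim follows from Ohm's law: resistive loss scales with the square of the current. The sketch below uses hypothetical load and resistance values and a simplified single-conductor model, ignoring three-phase effects and power factor.

```python
# Resistive wiring loss is P = I^2 * R. Delivering the same power at a
# higher voltage draws proportionally less current, so line loss drops
# with the square of the voltage ratio. All numbers are hypothetical.

def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current = power_w / voltage_v           # amps needed to deliver the load
    return current ** 2 * resistance_ohm    # I^2 * R dissipated in the feeder

load = 100_000.0   # hypothetical 100 kW load
r = 0.05           # hypothetical feeder resistance, ohms

loss_208v = line_loss_watts(load, 208, r)
loss_415v = line_loss_watts(load, 415, r)
# Roughly doubling the voltage cuts the resistive loss to about a quarter.
```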
Running a data center on DC power saves energy by reducing the number of power conversions typically required in a conventional data center. Energy is lost at each conversion, so the conventional approach is less efficient than a DC system, which converts AC to DC once at high voltage and then distributes DC directly to the computers' power supplies.
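The end-to-end efficiency of a power chain is the product of each stage's efficiency, which is why trimming conversion steps helps. The stage names and percentages below are hypothetical round numbers chosen only to show the effect, not figures from the article.

```python
# End-to-end efficiency is the product of per-stage efficiencies, so every
# extra conversion compounds the loss. Stage values are hypothetical.
from math import prod

conventional_ac = [0.96, 0.95, 0.92, 0.90]  # e.g. rectifier, inverter, PDU, server PSU
dc_distribution = [0.96, 0.92]              # one AC-to-DC conversion, then server DC-DC

eff_ac = prod(conventional_ac)  # efficiency of the longer conventional chain
eff_dc = prod(dc_distribution)  # efficiency of the shorter DC chain
```

With these assumed values the four-stage chain delivers roughly 75 percent of the input energy to the load, while the two-stage DC chain delivers close to 88 percent.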
2. Seismic Risk Important Consideration in Data Center Design
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that seismic risk needs to be a consideration for data centers.
Earthquakes in Colorado and along the Eastern seaboard are reminders that seismic risk isn't simply an issue for California. What's more, good seismic design means strengthening both structural and non-structural components, such as fire sprinklers, emergency power and emergency communications. Structural components have received the lion's share of the attention in the past, but in recent years, non-structural components have been the subject of increasing focus.
Seismic compliance of non-structural components is a complicated matter. Buildings in areas of high seismic activity face stringent requirements for mechanical and electrical non-structural components. In most areas of the United States, however, buildings are exempt from these requirements.
The code trigger for many seismic requirements in both structural and non-structural components is the building's "seismic design category." The seismic design category is based on a structure's occupancy category and the severity of expected ground motion at the site.
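The two-input determination described above can be sketched as a lookup. The thresholds below are loosely modeled on the kind of tables found in ASCE 7 but are not code values; an actual seismic design category must be determined from the governing building code, which also considers additional parameters.

```python
# Simplified, illustrative assignment of a seismic design category (A-D)
# from expected ground motion and occupancy/risk category. Thresholds are
# NOT code values; consult the governing building code for real projects.

def seismic_design_category(s_ds: float, risk_category: int) -> str:
    """s_ds: design short-period spectral acceleration (g); risk_category: 1-4."""
    if s_ds < 0.167:
        return "A"
    if s_ds < 0.33:
        return "C" if risk_category == 4 else "B"  # essential facilities are held higher
    if s_ds < 0.50:
        return "D" if risk_category == 4 else "C"
    return "D"

# An ordinary building on a quiet site vs. an essential facility on an active one:
quiet_site = seismic_design_category(0.10, 2)
essential_active = seismic_design_category(0.60, 4)
```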
Data centers — unless they are considered as part of essential facilities and are located in one of the four major seismic activity zones — are generally not subject to the most stringent seismic compliance requirements.
But just because seismic compliance isn't required for most data centers doesn't mean it's a good idea to ignore it. A robust design that can withstand seismic events can keep a facility from losing time and money to a data center outage. Emergency power systems that continue operating through the quake and its aftershocks also help preserve computer data, reducing financial risk and supporting business continuity.
3. New HVAC Technologies Can Help Save Energy in Data Centers
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that taking advantage of new HVAC technologies can help manage power usage in data centers.
Leading companies are coming around to the use of new HVAC technologies and operating procedures such as air-side economization, evaporative cooling, and operating the data center under a wider range of temperatures.
An air-side economizer draws outside air into the building when it is easier to cool than the air returning from the conditioned space, and distributes that air to the space; exhaust air from the servers is vented outside. Under certain weather conditions, the economizer may mix intake and exhaust air to meet the temperature and humidity requirements of the computer equipment.
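The control decision behind that description can be sketched as a simple mode selector. This is a temperature-only simplification with a hypothetical supply setpoint; real economizer controls also evaluate humidity or enthalpy, as the paragraph above notes.

```python
# Sketch of economizer mode selection: use outside air alone when it can
# meet the supply setpoint, mix when it still helps, and fall back to
# mechanical cooling otherwise. Setpoints are hypothetical; real controls
# also check humidity/enthalpy limits for the computer equipment.

def economizer_mode(outside_temp_c: float, return_temp_c: float,
                    supply_setpoint_c: float = 18.0) -> str:
    if outside_temp_c <= supply_setpoint_c:
        return "full economizer"     # outside air alone meets the setpoint
    if outside_temp_c < return_temp_c:
        return "partial economizer"  # mix outside and return air, trim mechanically
    return "mechanical cooling"      # outside air offers no advantage
```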
Evaporative cooling uses non-refrigerated water to reduce indoor air temperature to the desirable range. Commonly referred to as swamp coolers, evaporative coolers utilize water in direct contact with the air being conditioned. Either the water is sprayed as a fine mist or a wetted medium is used to increase the rate of water evaporation into the air. As the water evaporates, it absorbs heat energy from the air, lowering the temperature of the air as the relative humidity of the air increases.
These systems are very energy efficient, as no mechanical cooling is employed. However, the systems do require dry air to work effectively, which limits full application to specific climates. Even the most conservative organizations, such as financial institutions, are beginning to use these types of systems, especially because ASHRAE has broadened the operating-temperature recommendations for data centers.
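Why dry air matters can be made concrete with a standard first-order model of a direct evaporative cooler: the supply temperature approaches the wet-bulb temperature as water evaporates, so the available cooling is the wet-bulb depression scaled by a saturation effectiveness. The 0.85 effectiveness below is an assumed figure typical of wetted-media coolers, not a value from the article.

```python
# First-order model of a direct evaporative cooler: supply air temperature
# approaches the wet-bulb temperature, scaled by a saturation effectiveness
# (assumed 0.85 here, a typical figure for wetted-media coolers).

def evap_supply_temp_c(dry_bulb_c: float, wet_bulb_c: float,
                       effectiveness: float = 0.85) -> float:
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Dry climate (large wet-bulb depression) vs. humid climate, same dry bulb:
dry_climate = evap_supply_temp_c(35.0, 18.0)    # substantial cooling
humid_climate = evap_supply_temp_c(35.0, 31.0)  # little cooling available
```

With a dry 35 degC / 18 degC wet-bulb day the model delivers roughly 20.6 degC supply air; on a humid day with a 31 degC wet bulb, only about 31.6 degC, which is why full application is limited to specific climates.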
Airside economizers and evaporative cooling systems are difficult to implement in existing data centers because they typically require large HVAC ductwork and a location close to the exterior of the building. In new facilities, these systems increase the capital cost of the facility, HVAC equipment and ductwork. However, over the course of the lifetime of the facility, these systems significantly reduce operating costs when used in the appropriate climate.
4. Tracking PUE Can Help Create More Efficient Data Centers
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that power usage effectiveness, or PUE, can be a good metric for measuring data center efficiency.
Calculated by dividing total site load by IT load in kilowatts (kW), PUE is a good gauge of a facility's energy efficiency, but relying on it alone can be misleading. For example, a new data center designed and equipped for energy efficiency and future expansion will initially have a poor PUE if it is not yet operating at full design load. PUE will also degrade if an owner installs new servers with more energy-efficient power supplies in an existing data center, because the IT load in the denominator drops while the infrastructure loads stay roughly fixed.
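Both of those distortions come straight from the arithmetic of the metric. The sketch below uses hypothetical load figures and models the facility overhead as fixed, which is the essence of why a smaller IT load makes the same site look worse.

```python
# PUE = total facility power / IT power. With a largely fixed overhead
# (cooling, UPS losses, lighting), shrinking the IT denominator makes the
# reported PUE worse even though no more energy is being wasted.
# All numbers are hypothetical.

def pue(total_kw: float, it_kw: float) -> float:
    return total_kw / it_kw

infrastructure_kw = 400.0                        # assumed fixed overhead
design = pue(infrastructure_kw + 1000.0, 1000.0)  # full design IT load
partial = pue(infrastructure_kw + 250.0, 250.0)   # same site, lightly loaded
```

At full design load this hypothetical site reports a PUE of 1.4; at a quarter of the IT load, the identical infrastructure yields 2.6.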
But, as an overall snapshot of how efficient your data center is, PUE is effective. The goal of the "PUE Arms Race" is to drive down power usage effectiveness to 1.0, where the only energy used is the energy powering the computer.
There are three fundamental ways to improve the energy efficiency of a data center. One way is to install new computer equipment with more efficient power supplies, and this is often done as owners periodically refresh their computing equipment. Another is to implement on-site power generation, for example, cogeneration or solar power. These grand-scheme approaches are not often implemented today, but they have increasing potential as the technologies improve and their capital cost decreases.
The third approach is to design, engineer and operate data centers to maximize the efficiency of the building infrastructure. Whatever else an organization is doing, this is fundamental to improving energy efficiency. Here is a look at some of the leading trends in mechanical and electrical systems. Many of the techniques being implemented by data center owners and engineers have an established track record in non-critical facilities. With data center energy costs escalating, these techniques are making their way into the mission-critical arena.