4 FM quick reads on data centers
1. Seismic Risk Important Consideration in Data Center Design
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that seismic risk needs to be a consideration for data centers.
Earthquakes in Colorado and along the Eastern seaboard are reminders that seismic risk isn't simply an issue for California. What's more, good seismic design means strengthening both structural and non-structural components, such as fire sprinklers, emergency power and emergency communications. Structural components have received the lion's share of the attention in the past, but in recent years, non-structural components have been the subject of increasing focus.
Seismic compliance of non-structural components is a complicated matter. Buildings in areas of high seismic activity face stringent requirements for mechanical and electrical non-structural components. In most areas of the United States, however, buildings are exempt from those requirements.
The code trigger for many seismic requirements in both structural and non-structural components is the building's "seismic design category." The seismic design category is based on a structure's occupancy category and the severity of expected ground motion at the site.
Unless a data center is considered an essential facility and is located in one of the four major seismic activity zones, it is generally not subject to the most stringent seismic compliance requirements.
But just because seismic compliance isn't required for most data centers doesn't mean it's wise to ignore it. A robust design that can withstand seismic events can keep a facility from losing time and money to an outage. And if emergency power systems continue to operate through a quake and its aftershocks, computer data can be preserved, reducing financial risk and helping ensure business continuity.
2. New HVAC Technologies Can Help Save Energy in Data Centers
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that taking advantage of new HVAC technologies can help manage power usage in data centers.
Leading companies are coming around to the use of new HVAC technologies and operating procedures such as air-side economization, evaporative cooling, and operating the data center under a wider range of temperatures.
An air-side economizer draws outside air into the building when it is easier to cool than the air returning from the conditioned space, and distributes it to the space; exhaust air from the servers is vented outside. Under certain weather conditions, the economizer may mix intake and exhaust air to meet the temperature and humidity requirements of the computer equipment.
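The economizer decision described above (use outside air alone, mix it with return air, or fall back to mechanical cooling) can be sketched as a simple control function. The temperature setpoint and humidity limit below are illustrative assumptions, not values from the article:

```python
def economizer_mode(outside_temp_c, return_temp_c, outside_rh,
                    supply_setpoint_c=18.0, rh_max=80.0):
    """Decide how to source supply air for the data hall.

    The 18 C setpoint and 80% relative-humidity ceiling are
    illustrative assumptions; real limits come from the equipment
    manufacturer and ASHRAE guidance for the facility.
    """
    if outside_temp_c <= supply_setpoint_c and outside_rh <= rh_max:
        # Outside air alone can meet the setpoint: full economization.
        return "full_outside_air"
    if outside_temp_c < return_temp_c and outside_rh <= rh_max:
        # Outside air is cooler than return air: mix the two streams.
        return "mixed_air"
    # Otherwise fall back to mechanical cooling.
    return "mechanical_cooling"
```

A real economizer controller would typically compare enthalpy rather than dry-bulb temperature alone, but the structure of the decision is the same.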
Evaporative cooling uses non-refrigerated water to reduce indoor air temperature to the desirable range. Commonly referred to as swamp coolers, evaporative coolers utilize water in direct contact with the air being conditioned. Either the water is sprayed as a fine mist or a wetted medium is used to increase the rate of water evaporation into the air. As the water evaporates, it absorbs heat energy from the air, lowering the temperature of the air as the relative humidity of the air increases.
These systems are very energy efficient, as no mechanical cooling is employed. However, the systems do require dry air to work effectively, which limits full application to specific climates. Even the most conservative organizations, such as financial institutions, are beginning to use these types of systems, especially because ASHRAE has broadened the operating-temperature recommendations for data centers.
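As a rough illustration of the physics described above, a direct evaporative cooler's supply temperature can be estimated with the standard saturation-effectiveness relation, T_out = T_db - eff * (T_db - T_wb). The 85 percent effectiveness used here is a typical figure for wetted-media coolers, assumed for illustration; the example also shows why dry air, with its large spread between dry-bulb and wet-bulb temperature, is needed for these systems to work well:

```python
def evap_cooler_outlet_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
    """Approximate supply-air temperature from a direct evaporative cooler.

    Saturation-effectiveness relation: the cooler closes a fraction
    (effectiveness) of the gap between dry-bulb and wet-bulb temperature.
    The 0.85 default is an assumed, typical wetted-media value.
    """
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Hot, dry air: 35 C dry bulb, 20 C wet bulb -> large temperature drop.
dry_climate = evap_cooler_outlet_temp(35.0, 20.0)
# Hot, humid air: the wet bulb is close to the dry bulb, so little cooling.
humid_climate = evap_cooler_outlet_temp(35.0, 32.0)
```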
Air-side economizers and evaporative cooling systems are difficult to retrofit into existing data centers because they typically require large HVAC ductwork and a location close to the building exterior. In new facilities, these systems increase capital costs for the facility, HVAC equipment and ductwork. Over the lifetime of the facility, however, they significantly reduce operating costs when used in the appropriate climate.
3. Tracking PUE Can Help Create More Efficient Data Centers
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that power usage effectiveness, or PUE, can be a good metric for measuring data center efficiency.
Calculated by dividing total site load by IT load in kilowatts (kW), PUE is a good gauge of a facility's energy efficiency, but relying on it alone can be misleading. For example, a new data center designed and equipped for energy efficiency and future expansion will initially have a poor PUE while it is not yet operating at full design load, because fixed infrastructure loads are spread over a small IT load. PUE will also degrade if an owner installs new servers with more energy-efficient power supplies in an existing data center: the IT load drops while infrastructure loads stay roughly constant, even though total energy use falls.
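The PUE arithmetic above, together with the partial-load effect, can be illustrated with a toy model in which part of the infrastructure load is fixed regardless of IT load. The fixed and proportional overhead figures below are invented for illustration only:

```python
def pue(total_site_kw, it_kw):
    """Power usage effectiveness: total site load divided by IT load (kW)."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_site_kw / it_kw

def site_load_kw(it_kw, fixed_overhead_kw=150.0, proportional_overhead=0.3):
    """Toy model: some infrastructure load is fixed (lighting, controls),
    some tracks IT load (cooling). The overhead figures are illustrative
    assumptions, not measured data."""
    return it_kw + fixed_overhead_kw + proportional_overhead * it_kw

# At full design load the fixed overhead is well amortized...
full = pue(site_load_kw(625.0), 625.0)
# ...but at partial load the same facility reports a worse PUE.
partial = pue(site_load_kw(200.0), 200.0)
print(round(full, 2), round(partial, 2))  # 1.54 2.05
```

The same mechanism explains why swapping in more efficient servers raises PUE: it shrinks the denominator without shrinking the fixed overhead.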
But as an overall snapshot of how efficient your data center is, PUE is effective. The goal of the "PUE arms race" is to drive PUE down toward 1.0, where the only energy used is the energy powering the computing equipment.
There are three fundamental ways to improve the energy efficiency of a data center. One way is to install new computer equipment with more efficient power supplies, and this is often done as owners periodically refresh their computing equipment. Another is to implement on-site power generation, for example, cogeneration or solar power. These grand-scheme approaches are not often implemented today, but they have increasing potential as the technologies improve and their capital cost decreases.
The third approach is to design, engineer and operate data centers to maximize the efficiency of the building infrastructure. Whatever else an organization is doing, this is fundamental to improving energy efficiency. Here is a look at some of the leading trends in mechanical and electrical systems. Many of the techniques being implemented by data center owners and engineers have an established track record in non-critical facilities. With data center energy costs escalating, these techniques are making their way into the mission-critical arena.
4. Consider the Cloud When Expanding Data Centers
This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that cloud services are becoming a more common option for data center expansion.
The major advantage of expanding to the cloud is that the organization pays only for access to the services it needs to meet its business objectives, not for ownership of capital assets. The approach offers an immediate, scalable solution for a monthly or annual fee.
Although cloud services are touted as a new concept, they have actually been available for more than 50 years under different terms, including time sharing and partitioning. Today, cloud services take three major forms: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).
SaaS is the most popular form of cloud services. A service provider offers software to support the end user's business. End users can configure the software to suit their needs, although they cannot change or modify it.
PaaS offers a platform to clients for various purposes. For example, Microsoft Windows Azure offers a platform for developers to build, test and host applications that can be accessed by the end users.
IaaS offers infrastructure on demand, ranging from storage and servers to operating systems. For example, Amazon's Elastic Compute Cloud (EC2) provides virtual servers and storage on demand. IaaS enables an organization to save on the capital costs, space and staff it takes to set up and maintain in-house infrastructure.
While cloud computing can be an option for data center expansion, remember that the decision will hinge on business analysis. Apply core financial analysis and weigh the payback period, net present value and internal rate of return of cloud computing versus other expansion options. At the end of the day, the technology solution must support the business.