4  FM quick reads on data centers

1. Monitoring Is Key To Data Center Efficiency


The key piece of an energy-efficiency program in a data center is monitoring, says Jason Yaeger, director of operations at Online Tech.

"If you are not monitoring your IT load and what your critical infrastructure is using — if you have something that is using more electricity than it should — you will not know unless you are monitoring on a daily basis. We first invested money in monitoring," he says.

New data centers have the opportunity to incorporate energy efficiency into the original design, which is what the University of Arkansas for Medical Sciences, Little Rock, did with its 12,944-square-foot primary data center that opened in December 2010. The building has an Energy Star rating of 77, says Jonathan Flannery, executive director of engineering and operations, campus operations. The 54-building campus has been an Energy Star partner for five years and added the new facility to its existing portfolio.

To separate the data center load for monitoring, a modem on the UPS tracks the energy the data center uses and reports it to the building management system. Energy-saving features in the data center include using outside air to cool the building rather than mechanically chilling it with the air handlers. The data center also uses other typical measures, such as hot-water controls that set back temperatures in the evening, automated lighting controls with occupancy sensors, and a digital power management system.
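To make that kind of monitoring concrete, here is a minimal sketch, in Python, of the comparison the metering enables: dividing total facility energy by the IT load measured at the UPS to get power usage effectiveness (PUE). The meter readings in the example are hypothetical placeholders, not figures from the UAMS facility.

```python
# Illustrative sketch: comparing IT load (metered at the UPS) against total
# facility energy (from the building management system) to compute PUE.
# The readings below are hypothetical placeholders.

def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (lower is better; 1.0 is ideal)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical daily meter readings
it_load_kwh = 4_800         # energy delivered through the UPS to the computer equipment
facility_total_kwh = 7_200  # IT load plus cooling, lighting, and power losses

pue = power_usage_effectiveness(facility_total_kwh, it_load_kwh)
print(f"PUE: {pue:.2f}")    # 1.50 in this example; a sudden rise flags equipment using more than it should
```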

The program dates back to at least 2000, when "we were able to get funding and could make major impacts to our program." Energy efficiency is important, says Flannery, because "to make a dollar at a hospital, it requires a lot of work."

Every dollar the hospital makes costs 70 cents, according to Flannery. "If I don't have to spend a dollar on utilities, we can save a dollar that can go somewhere else—to patient care, buying a new MRI, education for our university. Every dollar we can save is a dollar that gets invested somewhere else."
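A quick bit of arithmetic shows why that margin matters. The figures below simply restate Flannery's numbers: at roughly 30 cents of margin per dollar of revenue, a dollar saved on utilities is worth about $3.33 in new revenue.

```python
# Rough arithmetic behind Flannery's point: if every dollar of revenue costs
# about 70 cents to earn, the margin is roughly 30 cents on the dollar.
cost_per_revenue_dollar = 0.70
margin = 1.00 - cost_per_revenue_dollar   # 0.30

# Revenue needed to net the same dollar that a utility saving frees up directly
revenue_equivalent = 1.00 / margin
print(f"A dollar saved on utilities is worth about ${revenue_equivalent:.2f} in new revenue")
# -> about $3.33
```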


2.  Data Center Innovation Demands Better Communication

Continued innovation in computer technology is pushing the facility management and information technology (IT) departments closer together. As servers become more compact, they take up less space, but at the same time they require many more kilowatts of power to run them.

That appetite for power creates the need for more space in the data center to house the facility infrastructure that keeps the computers from going down. In a data center, "white space" refers to the usable space, measured in square feet, where computer cabinets are housed. The amount of space the computers themselves need is shrinking, in part because servers are much thinner, allowing more of them to fit into less space.

"Companies can now incorporate blade servers, which can hold up to 42 servers per rack," says Paul E. Schlattman, vice president, mission critical facilities group, Environmental Systems Design. The new servers may now need only two racks where old servers would have needed 10.

Servers may also need less physical space because of virtualization, says Schlattman. Virtualization allows a single server to run multiple platforms. In the past, each server would use only 8 to 10 percent of its capacity because it ran only a specific type of software. With virtualization, "now my server is running at 80 percent of its capacity," says Schlattman. "This also increases the need for power, because the (server) is running hotter." Although a blade rack requires more power than a normal rack, the data center as a whole saves both space and energy by using it.
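The consolidation math behind that example can be sketched as follows. The utilization figures come from the article; the workload count and per-server wattages are assumptions added purely for illustration.

```python
# Consolidation math behind Schlattman's example. Utilization figures (10% before,
# 80% after) come from the article; the workload count and per-server wattages
# are hypothetical assumptions.

workloads = 200                 # hypothetical number of applications to host
util_before, util_after = 0.10, 0.80

# One workload per physical server before virtualization
servers_before = workloads
# After virtualization, each host runs roughly util_after / util_before workloads
consolidation_ratio = util_after / util_before              # 8:1 here
servers_after = -(-workloads // int(consolidation_ratio))   # ceiling division -> 25 hosts

watts_light, watts_busy = 250, 450   # hypothetical draw at low vs. high utilization
power_before_kw = servers_before * watts_light / 1000
power_after_kw = servers_after * watts_busy / 1000

print(f"{servers_before} servers -> {servers_after} hosts "
      f"({power_before_kw:.1f} kW -> {power_after_kw:.1f} kW total)")
# Fewer racks and less total power, but each remaining rack runs hotter.
```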

Those IT-side developments have huge implications for the data center facility infrastructure. As density increases, so does the need for support infrastructure: power transformers, uninterruptible power supply (UPS) systems, computer room air conditioners (CRACs) and chillers, and air distribution systems. In the highest-tier data centers, support infrastructure may occupy four to six times the amount of space needed to house the computers. The higher the kilowatt load the computers place on the facility, the more support infrastructure is needed.

The increases in density have been significant, says R. Stephen Spinazzola, vice president, RTKL Associates. "Ten years ago, 500 kilowatts [of power] was considered to be robust; today 1,000 to 5,000 kilowatts of power is robust," he says. Clearly, the infrastructure square footage needed to support 5,000 kilowatts will be much greater than the space needed to support computers running on less power.

A computer cabinet that might have drawn one kilowatt 10 years ago may now draw 50 kilowatts. "It is hard to distribute that much power and cooling in a small space," Spinazzola says. The most common mistake in legacy data centers is to keep "migrating technology," or increasing computing power in the same amount of space, without thinking about how the facility can supply all that power and cooling to support the increased IT load, Spinazzola says.
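A rough airflow calculation shows why. Using the standard sensible-heat relationship, CFM = BTU/hr / (1.08 x temperature rise in degrees F), the sketch below compares the cooling air a 1 kW cabinet and a 50 kW cabinet require; the 20-degree temperature rise is an assumed, typical figure.

```python
# Why a 50 kW cabinet is hard to cool: the standard sensible-heat relationship
# CFM = BTU/hr / (1.08 * delta_T_F) shows how much airflow one cabinet needs.
# The 20 F supply-to-return temperature rise is an assumed, typical value.

def required_airflow_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    btu_per_hr = load_kw * 3412          # 1 kW ~= 3,412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

for load_kw in (1, 10, 50):
    print(f"{load_kw:>2} kW cabinet -> ~{required_airflow_cfm(load_kw):,.0f} CFM of cooling air")
# A 1 kW cabinet needs roughly 160 CFM; a 50 kW cabinet needs roughly 7,900 CFM,
# which is why power and cooling distribution, not floor space, becomes the constraint.
```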

3.  FM And IT Need To Work Together

Today's tip is to involve both facility management and IT in designing a new data center. With input from both groups, design decisions can benefit both sides.

When FM and IT don't work together, the most common problem is over-built mechanical and electrical infrastructure: too much UPS capacity, too many generators, and excessive precision cooling installed on Day 1. The oversized equipment then operates very inefficiently, and reliability expectations may not be achieved.

The IT group may have critical applications that require a higher level of reliability than facility management plans to build. Often, one of the biggest issues is the need to keep the data center operating while maintenance is performed, or as the industry calls it, concurrent maintenance.

FM and IT need to agree on a set of performance objectives and success measures. Learn to communicate free of jargon, using easy-to-understand terms and descriptions. Describe the challenges each side faces and the interdependencies between the two disciplines. Facility management and IT also need to jointly educate themselves about risk analysis, assessment and mitigation, in order to explain to each other what can happen under various scenarios. Necessary steps include going beyond the Uptime Institute Tier and other rating systems; exploring failure rates and their effects; and considering different ways to address risks, such as operations and maintenance improvements.

All of these considerations should be communicated to the on-site facility management and all shifts of the IT staff. If a problem occurs, both IT and facility management need to know. Finally, the team should conduct a post-event evaluation to determine the cause and prevent it from happening again.

Ten years ago, reliability was the top priority for data center design and operation. Now, cost to build and cost to operate are equally important. Solutions have to be scalable to allow for critical power and cooling to be installed in increments to match the growth in IT build-out. This cannot happen without close coordination between facility management and IT.

4.  DCIM Offers Benefits In Legacy Data Centers

This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that data center infrastructure management (DCIM) offers benefits in legacy data centers.

Where there is an existing plant, a middleware DCIM system is ideal. Configured and placed on the industrial Ethernet or IT network, the system listens on the wire for any pre-defined data, or receives traps from legacy management and monitoring systems, and reports or archives the information accordingly.
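As a rough illustration of that "listen and archive" role, the sketch below is a bare UDP listener on the standard SNMP trap port that timestamps and files whatever it receives. A production middleware layer would decode SNMP, BACnet or Modbus properly and feed a database; the port and archive path here are assumptions for illustration only.

```python
# Minimal sketch of the "listen and archive" idea: a UDP listener on the standard
# SNMP trap port (162) that timestamps each datagram and appends it to a log file.
# A real DCIM middleware layer would decode the protocols and write to a database;
# the port and file path here are assumptions.

import socketserver
from datetime import datetime, timezone

ARCHIVE_PATH = "dcim_trap_archive.log"   # hypothetical archive file

class TrapArchiver(socketserver.BaseRequestHandler):
    def handle(self):
        payload, _sock = self.request                 # raw datagram bytes and the socket
        stamp = datetime.now(timezone.utc).isoformat()
        with open(ARCHIVE_PATH, "a") as archive:
            archive.write(f"{stamp} {self.client_address[0]} {payload.hex()}\n")

if __name__ == "__main__":
    # Binding to port 162 typically requires elevated privileges.
    with socketserver.UDPServer(("0.0.0.0", 162), TrapArchiver) as server:
        server.serve_forever()
```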

The benefits of this type of installation include the use of existing BMS/BAS/EPMS (emergency power management system) and network management systems, and shorter installation time, since the points at the far end are already connected. However, there are drawbacks: the risk that custom software will have to be developed to fit existing systems; incomplete data gathering if the building's legacy 'tool' cannot integrate; the potential that staff may tire of the system prior to full implementation and shelve it; concern that the DCIM will be yet another platform that increases operational expenses; and the probability that something on the raised floor will change during DCIM implementation, rendering Day 1 data out of date.

Don't let the drawbacks weigh too heavily, though. Tying existing systems into a central point of collection wisely capitalizes on the existing investment in management systems and enables cross-system data sharing. But be wary when checking into DCIM or middleware. Ask a lot of questions, and provide vendors with a list of the systems you want integrated.

Questions should include the obvious: Can you integrate with everything on my list? What protocols have you successfully integrated with? What systems have you successfully integrated with? Can I use this to tie together multiple data centers, not just one?

Be prepared for a lot of "vaporware." DCIM middleware at this level of integration is an emerging field, and vendors will promise that the next release will contain everything. Don't expect an out-of-the-box solution from any of them — there are just too many types of systems. Consider creating a bubble diagram that shows the existing systems by manufacturer name and function and reveals any existing relationships between them, as well as the desired future relationships. This can go a long way toward illustrating the desired integration.
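One way to start that bubble diagram is to capture the systems and relationships as data and generate a drawing from it. The sketch below emits Graphviz DOT text; the system names, manufacturers and relationships are made-up placeholders, not a recommended architecture.

```python
# Capture the "bubble diagram" as data: list each system with its manufacturer and
# function, list existing and desired relationships, and emit Graphviz DOT text
# that can be rendered into a diagram. The entries below are placeholders.

systems = {
    "BMS":  {"manufacturer": "Vendor A", "function": "building management"},
    "EPMS": {"manufacturer": "Vendor B", "function": "power monitoring"},
    "NMS":  {"manufacturer": "Vendor C", "function": "network management"},
    "DCIM": {"manufacturer": "TBD",      "function": "central collection point"},
}
existing_links = [("BMS", "EPMS")]
desired_links = [("DCIM", "BMS"), ("DCIM", "EPMS"), ("DCIM", "NMS")]

lines = ["graph integration_map {"]
for name, info in systems.items():
    label = f"{name}\\n{info['manufacturer']}\\n{info['function']}"
    lines.append(f'    {name} [label="{label}"];')
for a, b in existing_links:
    lines.append(f"    {a} -- {b};")
for a, b in desired_links:
    lines.append(f"    {a} -- {b} [style=dashed];   // desired, not yet built")
print("\n".join(lines + ["}"]))
```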

