4  FM quick reads on data centers

1. Data Center Innovation Demands Better Communication


Continued innovation in computer technology is pushing the facility management and information technology (IT) departments closer together. As servers become more compact, they take up less space, but at the same time they require far more kilowatts of power to run them.

The appetite for energy creates the need for more space in the data center to house the facility infrastructure that keeps the computers from going down. In a data center, "white space" refers to the usable space, measured in square feet, where computer cabinets are housed. The amount of space the computers themselves need is shrinking, in part because servers are much thinner, allowing more of them to fit into a given footprint.

"Companies can now incorporate blade servers, which can hold up to 42 servers per rack," says Paul E. Schlattman, vice president, mission critical facilities group, Environmental Systems Design. The new servers may now need only two racks where old servers would have needed 10.

Servers may also need less physical space because of virtualization, says Schlattman. Virtualization allows a server to run multiple platforms. In the past, each server would use only 8 to 10 percent of its capacity, because it ran only a specific type of software. With virtualization, on the other hand, servers can run multiple platforms, "so now my server is running at 80 percent of its capacity," says Schlattman. "This also increases the need for power, because the (server) is running hotter." Although a blade rack requires more power than a conventional rack, overall the data center saves both space and energy by using blade racks.
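
To see how those numbers compound, here is a back-of-the-envelope sketch in Python using the figures quoted above; the rack counts and utilization percentages are illustrative assumptions, not a sizing tool.

```python
# Back-of-the-envelope consolidation math using the figures quoted above.
legacy_racks = 10           # racks the old standalone servers occupied
blade_racks = 2             # racks after moving to blade chassis
legacy_utilization = 0.09   # roughly 8 to 10 percent of capacity per dedicated server
virtual_utilization = 0.80  # roughly 80 percent with virtualization

space_saved = 1 - blade_racks / legacy_racks
useful_work_ratio = virtual_utilization / legacy_utilization

print(f"Floor space reduction: {space_saved:.0%}")                 # 80%
print(f"Useful work per server: about {useful_work_ratio:.0f}x")   # about 9x
```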

Those IT-side developments have huge implications for the data center facility infrastructure. As density increases, so does the need for support infrastructure: power transformers, uninterruptible power supply (UPS) systems, computer room air conditioners (CRACs) and chillers, and air distribution systems. In the highest tier data centers, support infrastructure may occupy four to six times the amount of space needed to house the computers. The higher the kilowatt load the computers draw, the more support infrastructure is needed.

The increases in density have been significant, says R. Stephen Spinazzola, vice president, RTKL Associates. "Ten years ago, 500 kilowatts [of power] was considered to be robust; today 1,000 to 5,000 kilowatts of power is robust," he says. Clearly, the infrastructure square footage needed to support 5,000 kilowatts will be much greater than the space needed to support computers drawing far less power.

A single computer cabinet may have been powered by one kilowatt 10 years ago, but it now uses 50 kilowatts. "It is hard to distribute that much power and cooling in a small space," Spinazzola says. The most common mistake in legacy data centers is to keep "migrating technology," or increasing computing power in the same amount of space, without thinking about how the facility can supply all that power and cooling to support the increased IT load, Spinazzola says.
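
As a rough illustration of what that density means for floor space, the sketch below applies the four-to-six-times support-space ratio cited above to a hypothetical room; the cabinet count and footprint are assumptions made for the sake of the arithmetic.

```python
# Rough white-space vs. support-space estimate for a hypothetical room.
cabinets = 100                # assumed cabinet count
cabinet_footprint_sqft = 25   # assumed sq ft per cabinet, including aisle share
kw_per_cabinet = 50           # high-density figure quoted above

white_space_sqft = cabinets * cabinet_footprint_sqft
it_load_kw = cabinets * kw_per_cabinet

for ratio in (4, 6):  # support-space multiplier cited for the highest tiers
    support_space_sqft = white_space_sqft * ratio
    print(f"{it_load_kw} kW IT load: {white_space_sqft} sq ft white space, "
          f"about {support_space_sqft} sq ft support space at {ratio}x")
```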


2.  FM And IT Need To Work Together

Today's tip is to involve both facility management and IT in designing a new data center. With input from both groups, design decisions can benefit both sides.

When FM and IT don't work together, the most common problem is over-built mechanical and electrical infrastructure: too much UPS capacity, too many generators and excessive precision cooling installed on Day 1. In addition, oversized equipment operates very inefficiently, and reliability expectations may not be achieved.
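
A quick sketch of why Day 1 oversizing hurts: the hypothetical loads and UPS capacities below show how far below its rating an over-built UPS runs, which is where efficiency losses typically show up.

```python
# Hypothetical Day 1 load against right-sized vs. over-built UPS capacity.
day_one_it_load_kw = 150

ups_options = {
    "right-sized": 250,   # kW of UPS capacity matched to near-term growth
    "over-built": 1000,   # kW of UPS capacity installed "just in case"
}

for label, capacity_kw in ups_options.items():
    load_factor = day_one_it_load_kw / capacity_kw
    print(f"{label}: UPS runs at {load_factor:.0%} of capacity on Day 1")

# Many double-conversion UPS modules lose efficiency below roughly 30 to 40
# percent load, so the over-built case wastes energy every hour it runs.
```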

The IT group may have critical applications that require a higher level of reliability than facility management plans to build. Often, one of the biggest issues is the need to keep the data center operating while maintenance is performed, or, as the industry calls it, concurrent maintenance.

FM and IT need to agree on a set of performance objectives and success measures. Learn to communicate free of jargon, using easy-to-understand terms and descriptions. Describe the challenges each side faces and the interdependencies between the two disciplines. Facility management and IT also need to jointly educate themselves about risk analysis, assessment and mitigation, in order to explain to each other what can happen under various scenarios. Necessary steps include going beyond the Uptime Institute Tier and other rating systems; exploring failure rates and their effects; and considering different ways to address risks, such as operations and maintenance improvements.
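
One simple way to move that conversation past tier labels is to translate failure and repair assumptions into expected downtime. The MTBF and MTTR values below are placeholders for discussion, not published reliability data.

```python
# Translate failure and repair assumptions into expected downtime per year.
HOURS_PER_YEAR = 8760

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

scenarios = {
    "slow repair (8 hours to restore)": (100_000, 8),
    "fast repair (2 hours to restore)": (100_000, 2),
}

for name, (mtbf, mttr) in scenarios.items():
    a = availability(mtbf, mttr)
    downtime_minutes = (1 - a) * HOURS_PER_YEAR * 60
    print(f"{name}: {a:.5%} available, about {downtime_minutes:.0f} minutes of downtime per year")
```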

All of these considerations should be communicated to the on-site facility management staff and to all shifts of the IT staff. If a problem occurs, both IT and facility management need to know. Finally, the team should conduct a post-event evaluation to determine the cause and prevent it from happening again.

Ten years ago, reliability was the top priority for data center design and operation. Now, cost to build and cost to operate are equally important. Solutions have to be scalable to allow for critical power and cooling to be installed in increments to match the growth in IT build-out. This cannot happen without close coordination between facility management and IT.
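
As a sketch of what incremental build-out looks like, the example below matches hypothetical modular UPS additions to a projected IT load; the module size and growth numbers are assumptions.

```python
# Match modular UPS additions to projected IT load instead of building it all on Day 1.
module_kw = 250
projected_it_load_kw = [150, 300, 500, 750]   # assumed load for years 1 through 4

for year, load_kw in enumerate(projected_it_load_kw, start=1):
    modules_needed = -(-load_kw // module_kw)  # ceiling division
    print(f"Year {year}: {load_kw} kW IT load -> {modules_needed} x {module_kw} kW modules")
```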

3.  DCIM Offers Benefits In Legacy Data Centers

This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that data center infrastructure management (DCIM) offers benefits in legacy data centers.

Where there is an existing plant, a middleware DCIM system is ideal. Placed on the industrial Ethernet or IT network, the system listens on the wire for pre-defined data or receives traps from legacy management and monitoring systems, then reports or archives that information accordingly.
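
As a rough sketch of that listen-and-archive pattern, the snippet below logs raw datagrams arriving on the standard SNMP trap port; a real middleware DCIM layer would decode the traps with an SNMP library and normalize the data against its point database.

```python
# Minimal listen-and-archive loop: log whatever arrives on the SNMP trap port.
import datetime
import socket

TRAP_PORT = 162  # standard SNMP trap port; binding to it may require elevated privileges

def run_listener(log_path="traps.log"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", TRAP_PORT))
    with open(log_path, "a") as log:
        while True:
            data, (sender, _port) = sock.recvfrom(4096)
            stamp = datetime.datetime.now().isoformat()
            # Archive the raw event; a real system would decode and normalize it.
            log.write(f"{stamp} trap from {sender}, {len(data)} bytes\n")
            log.flush()

if __name__ == "__main__":
    run_listener()
```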

The benefits of this type of installation include using existing BMS/BAS/EPMS (emergency power management system) and network management systems, as well as less installation time, since the points at the far end are already connected. However, many drawbacks exist. Those drawbacks include: the risk that custom software will have to be developed to fit existing systems; incomplete data gathering if the building's legacy 'tool' cannot integrate; the potential that staff may tire of the system prior to full implementation and shelve it; concern that the DCIM will be yet another platform that increases operational expenses; and the probability that something on the raised floor will change during DCIM implementation, rendering Day 1 data out of date.

Don't let the drawbacks weigh too heavily, though. Tying existing systems into a central point of collection wisely capitalizes on the existing investment in management systems and enables cross-system data sharing. But be wary when looking into DCIM or middleware. Ask a lot of questions and give the vendors a list of the systems you want integrated.

Questions should include the obvious: Can you integrate with everything on my list? What protocols have you successfully integrated with? What systems have you successfully integrated with? Can I use this to tie not only one, but multiple data centers together?

Be prepared for a lot of "vaporware." Middleware and DCIM at this level of integration are an emerging field, and vendors will promise that the next release will contain everything. Don't expect an out-of-box solution from any of them; there are just too many types of systems. Consider creating a bubble diagram that shows the existing systems by manufacturer name and function and reveals any existing relationships between them, as well as the desired future relationships. This can go a long way toward illustrating the desired integration.
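
The bubble diagram can start as a simple inventory of systems and relationships. The sketch below uses hypothetical vendor names to show the idea; it is a planning aid, not a product list.

```python
# A bubble diagram as data: systems by manufacturer and function, plus relationships.
# Vendor names are hypothetical placeholders.
systems = {
    "BAS":  {"manufacturer": "Vendor A", "function": "HVAC control"},
    "EPMS": {"manufacturer": "Vendor B", "function": "electrical power monitoring"},
    "NMS":  {"manufacturer": "Vendor C", "function": "network management"},
}

existing_links = [("BAS", "EPMS")]                   # integrations in place today
desired_links = [("BAS", "NMS"), ("EPMS", "NMS")]    # what the DCIM should add

for a, b in desired_links:
    print(f"Ask each vendor: can you bridge {a} ({systems[a]['manufacturer']}) "
          f"and {b} ({systems[b]['manufacturer']})?")
```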

4.  Securing Co-location Data Centers

This is Casey Laughman, managing editor of Building Operating Management magazine. Today's tip is that co-location data centers offer unique security challenges.

Co-location data centers provide multiple customers with the ability to locate network, server and storage gear within a shared infrastructure, minimizing both capital and operational costs for users. With a number of tenants in a variety of space configurations, co-location data centers face a unique infrastructure security challenge. Because co-location data centers are typically subdivided by cages, or simply by individual cabinets or IT racks, electronic access control is key.

Cages should be treated as rooms, with locks, so that air conditioning is the only element shared. Tenants should gain access only to their own cage, through an active card reader or similar equipment at the cage itself. For smaller clients that want just a cabinet or two, specify access control down to the cabinet level to provide individual access. This allows security personnel to track who is in each space moment to moment. For example, if five clients in one area each have their own racks, tracking who was where when something goes down will be much easier.
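
As a sketch of the kind of question cabinet-level access control can answer, the example below filters a hypothetical badge-event log to show who was in a given cage around the time of an incident; the field names and records are invented for illustration.

```python
# Who was in a given cage around the time of an incident, per the badge-event log.
from datetime import datetime, timedelta

access_log = [
    {"badge": "tenant-03", "location": "cage-B/cabinet-12", "time": datetime(2024, 5, 1, 14, 2)},
    {"badge": "tenant-05", "location": "cage-B/cabinet-14", "time": datetime(2024, 5, 1, 14, 10)},
    {"badge": "tenant-03", "location": "cage-C/cabinet-01", "time": datetime(2024, 5, 1, 15, 30)},
]

def who_was_there(log, location_prefix, incident_time, window_minutes=30):
    """Return badge events at a location within a window around the incident."""
    window = timedelta(minutes=window_minutes)
    return [event for event in log
            if event["location"].startswith(location_prefix)
            and abs(event["time"] - incident_time) <= window]

for event in who_was_there(access_log, "cage-B", datetime(2024, 5, 1, 14, 15)):
    print(event["badge"], event["location"], event["time"])
```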

Similarly, monitoring can be another function of the access control system in a co-location data center. Personnel can monitor access to cages, cabinets and racks to determine who is in the building, which tenants have doors open or closed, and so on. By having a dedicated security IP network, the security team can maintain tight control over security communications and allow for 24/7/365 operation, which can be a great selling point to prospective tenants.

