Data Center Diplomacy



Creating a high-performing data center requires teamwork and understanding between IT and facility executives


By Bob McFarlane  


Information technology — including the data center — has come to be recognized as a strategic component of the business enterprise. An outage, even a partial failure, can cripple a business. The costs of a major failure can be astronomical. It has been reported that as many as 70 percent of businesses that experience data center outages of 24 hours or more never recover. Is it any wonder IT professionals are so concerned about data center reliability when some factors may be out of their control?

Facility executives understand as well as anyone the critical importance of the data center and the consequences of a failure. When it comes to preventing failures, facility executives have the same goal as their peers in IT. Nevertheless, there is often a conflict between IT and facilities.

For example, IT is often excluded from the planning of data center projects. Why does that happen? The obvious answer is that IT doesn’t know the architectural and engineering design and construction process, can’t communicate its needs in realistic and meaningful terms, gets hung up on details that aren’t relevant to the project phase and wastes time the project can’t afford to lose. But IT has some valid observations as well. Here are some comments from working IT professionals who, along with facility professionals, are students in a course at the Marist College Institute for Data Center Professionals:

“Facilities feels that, once the server power and heat loads are determined, and there is a sense of how big the data center must be, IT should get out of the way and let them design and build it.”

“Facilities considers it their space, and we (IT) are just the tenants living in it, like being assigned an office.”

Facility executives complain that IT waits until the design or construction is practically done to identify the equipment they need. IT’s response:

“Facilities wants a list of all our equipment. We can do that, but it will change a lot before we move in. Then they’ll be upset when we tell them the data center’s not adequate. They just don’t realize how fast things change for us.”

Many attendees start off with the premise that IT is really the customer and that it is the responsibility of facilities to provide the environment IT needs to do its job. But as discussion continues, both sides realize that this can’t realistically happen if IT and facilities don’t communicate. The data center has become so complex, demanding and fast-changing that each side must learn more than it ever imagined about the other’s business.

Even cooperation between facilities and IT isn’t enough. Finance (and, for a new facility, real estate) also needs to be involved in the initial planning to ensure that there is an understanding of the risks that accompany tradeoffs, as well as a high-level assessment of costs or potential savings against the business exposure.

Increasing Concern

The issue of cooperation between facilities and IT has been a topic at virtually every data center conference in the past two years. The reason? The business risks from an IT failure have become enormous, and the mistakes of the past are too costly to repeat. Major new data centers can cost $50 million to $100 million to design and build — and that doesn’t count the millions of dollars in hardware that go into them. They cannot be created in a vacuum.

Teamwork will become even more essential over the next few years. According to Gartner, Inc., by 2008, 50 percent of current data centers will have insufficient power and cooling capacity to meet the demands of high-density equipment. Raised-floor cooling systems, which deliver cool air from beneath the floor to the server racks, won’t be effective anymore because heat densities have grown too great. Yet this is the only cooling approach available in most data centers, and it is still the only cooling model most people in the industry know anything about.

In addition, AFCOM (the major association of data center professionals) predicted that by 2010, nearly 70 percent of all data centers will use some form of grid computing or virtual processing. These are highly demanding systems from a power and cooling standpoint. AFCOM also predicted that, over the next five years, power failures and limits on power availability will halt data center operations at more than 90 percent of all companies.

IT executives understand these issues because they are experiencing the problems first-hand in their own data centers or because they have been attending the conferences where experts present the industry research. But only recently have even a small number of facility executives attended these events. Worse, many of the electrical and mechanical engineers on whom facilities has traditionally depended for advice and for data center infrastructure design are equally out of date — a problem that the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is trying to remedy through the work of its TC 9.9 Technical Committee on Mission Critical Facilities, Technology Spaces and Electronic Equipment.

Forces of Change

The major topics of concern these days are power and cooling. One full cabinet of high-performance computing hardware can cost $1,000,000 and generate between 15,000 and 30,000 watts of heat. And data center power costs have grown to the point where, over the normal four- to five-year lifespan of today’s computer technology, they could exceed the already significant cost of the hardware itself.
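To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The server price, power draw, facility overhead multiplier, electricity rate and service life are illustrative assumptions, not figures from the article, and a real comparison would use measured loads and local utility rates.

    # Back-of-the-envelope check of the claim that lifetime power cost can
    # rival hardware cost. All figures below are illustrative assumptions.
    HOURS_PER_YEAR = 8760

    def lifetime_energy_cost(avg_draw_kw, pue, price_per_kwh, years):
        """Electricity cost over the equipment's service life.

        avg_draw_kw   -- average IT power draw (kW)
        pue           -- facility overhead multiplier (cooling, UPS losses, etc.)
        price_per_kwh -- utility rate in dollars
        years         -- service life in years
        """
        return avg_draw_kw * pue * HOURS_PER_YEAR * years * price_per_kwh

    # Hypothetical commodity server: $3,000 purchase price, 0.4 kW average
    # draw, overhead multiplier of 2.0, $0.10 per kWh, five-year life.
    hardware_cost = 3000
    energy_cost = lifetime_energy_cost(avg_draw_kw=0.4, pue=2.0,
                                       price_per_kwh=0.10, years=5)
    print(f"Energy over 5 years: ${energy_cost:,.0f} vs. hardware: ${hardware_cost:,}")
    # Prints: Energy over 5 years: $3,504 vs. hardware: $3,000

Under these assumptions the five-year electricity bill edges past the purchase price; a higher utility rate or a less efficient facility widens the gap further.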

The consensus among IT executives is that most data centers built in the last five years will need to be replaced or significantly modified in the next five; otherwise they will ultimately prevent organizations from absorbing needed technologies and limit their ability to grow.

Some facility executives wonder if IT departments are overreacting, asking for more than they need, and worrying too much about cooling their equipment. Many facility executives believe that the equipment housed in data centers is both inefficient and under-used and that IT is buying more equipment than is really needed.

The response from IT is that there’s not a lot that can be done about most of those problems right now. The future may be different, because escalating power costs and concerns about energy efficiency and the environment have pushed both new thinking and government action. But how many businesses can afford to wait to address cooling and power concerns in hopes that major changes will solve their problems?

The good news is that there are signs change will eventually come. The dramatically increased power consumption has caught the eye of the U.S. Congress. House bill H.R. 5646 (with a virtually identical measure passed by the U.S. Senate) was signed into law in December 2006. It requires the Energy Star program to study how much power is consumed by corporate and federal data centers, what the industry is doing to develop energy-efficient servers, and what incentives might convince businesses to use energy-saving technologies.

Manufacturers, both in computing hardware and in power and cooling, were working on this issue long before Congress latched onto it, but the IT industry wasn’t buying the more energy-efficient technology. That means it may take laws that mandate higher energy efficiency (likely at the expense of computing performance), or that impose monetary penalties for energy waste, to force change. To get ahead of the curve, it will be necessary for both IT and facilities to become much more aware, not only of each other’s problems and needs, but of the solutions that are available in both camps to mitigate the runaway demand for energy.

Working Together

IT and facilities operate on very different playing fields. Facility executives are concerned about the entire building, while IT executives must keep up with the next business demand for more far-reaching applications and speed. Is it realistic to get them to cooperate? Some large organizations have, and while most report that the relationship still isn’t perfect, all say there has been improvement. How have these corporations gone about it? It’s been a multistep process:

  1. Both facilities and IT have to tell top management that joint operation is necessary and that it needs to be fully supported from above. In most organizations this has meant an additional staff member or two in facilities, but when management understands the potential cost of a shutdown and the operational cost of inefficiencies, it has accepted the added expense.
  2. Facilities staff are dedicated to the data center — usually one person specializing in power and one in cooling. In the best scenarios, they take direction from the CIO or data center manager and coordinate with the facility executive. (There have actually been reports of prospective CIOs refusing high-paying positions unless dedicated facilities personnel report to them.)
  3. Facilities and IT attend applicable seminars and listen to vendor presentations together, so that both parties increase their understanding of where the industry is going, what has to be done to prepare for it, and what solutions are available today to address the problems.
  4. No piece of equipment is purchased or installed without the involvement of both IT and facilities. Facilities is then in a much better position to foresee coming infrastructure demands; to help obtain realistic power and cooling information (it often takes knowledgeable digging to get accurate figures from vendors); to advise where in the data center a new machine can best be supported; to make sure the right circuits are used when it is installed; and either to convince management to fund additional power and cooling ahead of time or to advise management of the infrastructure costs a proposed new system will add to the price. (A simple pre-installation check of this kind is sketched just after this list.)
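As an illustration of that last step, here is a minimal sketch in Python of the kind of headroom check IT and facilities might run together before approving a new machine. The rack cooling budget, the circuit rating, the 80 percent usable-load derating and the equipment figures are all assumptions chosen for the example, not requirements quoted from the article.

    # Illustrative pre-installation check: does a proposed machine fit within
    # a rack's remaining cooling budget and its branch circuit? All numbers
    # below are hypothetical, not vendor data.

    def install_problems(rack, new_watts, circuit_amps, circuit_volts,
                         derate=0.8):
        """Return a list of problems; an empty list means the install fits.

        rack         -- dict with 'cooling_budget_w' and 'installed_w'
        circuit_amps -- breaker rating of the circuit feeding the rack
        derate       -- fraction of the breaker rating treated as usable
                        (a common continuous-load derating practice)
        """
        problems = []
        total_w = rack["installed_w"] + new_watts
        if total_w > rack["cooling_budget_w"]:
            problems.append(f"cooling: {total_w} W exceeds the "
                            f"{rack['cooling_budget_w']} W budget")
        usable_w = circuit_amps * circuit_volts * derate
        if total_w > usable_w:
            problems.append(f"power: {total_w} W exceeds {usable_w:.0f} W "
                            f"usable on a {circuit_amps} A circuit")
        return problems

    # Hypothetical rack: 6 kW cooling budget, 4.5 kW already installed,
    # fed by a 30 A / 208 V circuit, receiving a proposed 2 kW server.
    rack = {"cooling_budget_w": 6000, "installed_w": 4500}
    for issue in install_problems(rack, new_watts=2000,
                                  circuit_amps=30, circuit_volts=208):
        print("HOLD:", issue)

In this hypothetical case the check flags both a cooling and a power shortfall — exactly the conversation the two groups need to have before the equipment arrives rather than after.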

In short, facilities should be leading the push for improved power and environmental monitoring. And facilities should use tools like infrared thermal scanning and computational fluid dynamics (CFD) air flow modeling to identify potential problems and help IT get the most out of the infrastructure they have. By working with IT, instead of separate from them, facilities will better understand other important factors as well, like efficient operational layout and the importance of logical organization and clear labeling. They will see ways to make improvements that IT might never think of, and have the tools to implement them. Considering that 75 percent of data center outages are said to result from human error, anything that reduces the chances of throwing a wrong breaker or overloading a circuit should be a high priority.
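To make the monitoring point concrete, here is a hedged sketch in Python of the simplest useful form of environmental monitoring: reviewing rack inlet temperatures and flagging readings that drift outside a chosen band. The readings, the 20 to 25 degree C band and the alert wording are assumptions for illustration; a real deployment would pull live data from sensors or a building-management system and use the facility's own setpoints.

    # Illustrative environmental-monitoring pass: flag rack inlet temperatures
    # outside an assumed recommended band. Readings below are hypothetical.

    inlet_temps_c = {
        "rack-A1": 22.5,
        "rack-A2": 27.8,   # developing hot spot
        "rack-B1": 19.4,
        "rack-B2": 24.9,
    }

    LOW_C, HIGH_C = 20.0, 25.0   # assumed recommended inlet band

    for rack, temp in sorted(inlet_temps_c.items()):
        if temp > HIGH_C:
            print(f"ALERT  {rack}: inlet {temp:.1f} C is above {HIGH_C:.0f} C")
        elif temp < LOW_C:
            print(f"NOTICE {rack}: inlet {temp:.1f} C is below {LOW_C:.0f} C (likely overcooling)")
        else:
            print(f"OK     {rack}: inlet {temp:.1f} C")

Even a loop this simple, fed by a handful of inexpensive sensors, gives facilities and IT a shared, factual picture of conditions on the floor instead of two competing sets of assumptions.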

There’s a great deal of knowledge and experience in both IT and facilities, but the two sides have different knowledge and experience. Put them together and the resulting synergy can be an enormous benefit to the business, as well as to the professional lives of everyone involved.

Robert E. McFarlane is a principal with Shen Milsom & Wilke, Inc., which provides consulting, design and technical expertise in IT/telecommunications, multimedia/audiovisual, building security and acoustics.




Posted on 3/1/2007