By Kenneth Brill
Although a server running one application is a lot better than a server running no applications, the idea that each application should have a dedicated server is outdated. The approach arose for reasons of reliability: If a server with a Windows operating system failed, only one application would be affected. That was the directive from Microsoft, and that was the way many servers were configured. As a result, the utilization rate for servers is often between 5 and 9 percent.
But servers have become more robust than they were in the past, paving the way for a new approach known as virtualization. The concept is simple: run multiple applications on a single server. That boosts the combined utilization rate. Up to 30 servers can be consolidated onto one, although networked storage must usually be expanded to support them. The result is big energy savings, even after accounting for the added power drawn by servers running consolidated applications and by the extra storage.
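The utilization arithmetic behind consolidation can be sketched in a few lines. The figures below (5 percent average utilization, a host three times as powerful as one legacy server) are illustrative assumptions, not numbers from this article:

```python
# Rough arithmetic (assumed figures) showing why consolidation lifts utilization.
old_servers = 30
old_utilization = 0.05   # ~5% average, typical of the dedicated-server model
host_speedup = 3.0       # assumption: the new host is 3x one legacy server

# Total work done by the 30 old servers, in old-server capacity units:
workload = old_servers * old_utilization   # 1.5 old-server equivalents

# That same workload, placed on one faster host:
host_utilization = workload / host_speedup
print(f"{host_utilization:.0%}")  # 50%
```

Even under these loose assumptions, the single host runs an order of magnitude closer to its capacity than the dedicated servers did.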
Virtualization saved one European company $2.1 million a year in energy and avoided the need to spend $14 million to construct a new data center. Those savings were achieved by going from 3,100 servers to 150. The company also saved on network ports and cards and on labor for systems administration. IT also found that disaster recovery would be easier and that it could respond faster to demands from users for more capacity. In fact, virtualization was first justified by IT benefits; no one realized that the corporation could avoid having to build a new data center.
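A back-of-envelope calculation shows that savings on this scale are plausible. The per-server draw, facility overhead (PUE), and electricity price below are assumptions chosen for illustration; the article does not report them:

```python
# Plausibility check on the $2.1 million figure -- all inputs are assumptions.
servers_removed = 3100 - 150    # from the article's consolidation numbers
it_kw_per_server = 0.4          # assumed average draw per legacy server (kW)
pue = 2.0                       # assumed facility overhead (cooling, power distribution)
price_per_kwh = 0.10            # assumed electricity price (USD)
hours_per_year = 8760

facility_kw = servers_removed * it_kw_per_server * pue
annual_savings = facility_kw * hours_per_year * price_per_kwh
print(f"${annual_savings:,.0f} per year")  # roughly $2.1 million with these inputs
```

With these assumptions the estimate lands at about $2.1 million a year, consistent with the reported savings.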
Virtualization is widely recognized by IT departments, and millions of virtualization licenses have been sold. But there is evidence that many of them are not being used. Why not? Because virtualization is a complex endeavor. Going from 3,100 servers to 150 took three years. The skilled workers needed to carry out virtualization are difficult to find, and this is not an area where mistakes are easily forgiven. Not all programs and servers are good candidates for virtualization: servers with erratic utilization rates are one example. What’s more, if the application and storage architecture is not designed correctly, there can be a significant drop in availability.
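One way to screen out poor candidates such as erratic servers is to look at both the average and the variability of measured utilization. The thresholds and function below are a hypothetical heuristic, not a method prescribed by the article:

```python
# Illustrative screening heuristic (assumed thresholds): flag servers whose
# utilization is both low on average and stable enough to consolidate safely.
from statistics import mean, stdev

def is_candidate(samples, max_mean=0.10, max_stdev=0.05):
    """Return True if CPU-utilization samples (0.0-1.0) suggest the
    server is a reasonable consolidation candidate."""
    return mean(samples) <= max_mean and stdev(samples) <= max_stdev

steady = [0.05, 0.07, 0.06, 0.08, 0.05]   # low and flat: good candidate
erratic = [0.05, 0.60, 0.02, 0.55, 0.04]  # spiky: keep on dedicated hardware
print(is_candidate(steady), is_candidate(erratic))  # True False
```

A real assessment would also weigh peak coincidence, I/O patterns, and licensing, but even this simple filter separates the steady server from the erratic one.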
Virtualization can therefore be a tall order for time-pressed IT departments. But the approaches mentioned earlier can be accomplished with little risk and favorable paybacks. Even if IT time is at a premium, the benefits of lower IT energy costs can be an incentive to act.
One caveat is that as IT equipment becomes more efficient, the facility may actually become less efficient. As the demand for cooling falls, the computer room air conditioning (CRAC) units will operate farther from full load. Typically, that means their efficiency will drop: they will consume more kilowatts per ton of cooling delivered. Don’t worry. From a top-management point of view, the less money spent on electricity for the data center, the better. If that number falls, no senior manager will ask why the CRAC units are running a little less efficiently.
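A quick worked example makes the caveat concrete. The IT loads and kW-per-ton figures below are assumptions for illustration only: the per-ton efficiency worsens at part load, yet the total bill still falls.

```python
# Illustrative numbers (assumptions, not from the article): total facility
# power falls even though the CRACs' kW-per-ton figure worsens at part load.
KW_PER_TON = 3.517  # 1 ton of refrigeration removes 3.517 kW of heat

def facility_kw(it_kw, crac_kw_per_ton):
    tons = it_kw / KW_PER_TON              # cooling load tracks IT heat output
    return it_kw + tons * crac_kw_per_ton  # IT power plus cooling power

before = facility_kw(500, 0.6)  # full-load CRACs: more efficient per ton
after = facility_kw(150, 0.9)   # part-load CRACs: less efficient per ton
print(f"{before:.0f} kW -> {after:.0f} kW")  # 585 kW -> 188 kW
```

The cooling plant is doing its work 50 percent less efficiently per ton, but the facility as a whole draws roughly a third of the power it did before.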
Kenneth Brill is the founder and executive director of the Uptime Institute and the Site Uptime Network. He developed the concept of dual power in the early 1990s and was an originator of the industry’s Tier system for evaluating data center design.