Energy Star certified buildings meet strict energy performance standards, which must be demonstrated through actual energy bills covering a full year of operation. To this end, EPA has developed the 1-to-100 Energy Star energy performance scale, which, EPA explains, "provides an assessment of a building's energy efficiency, as compared with similar buildings nationwide." The scale is also adjusted for climate and business activity.
Data centers in the top quartile — a score of 75 or higher — may be eligible for certification. To earn that certification, a data center must have its energy bills verified by a professional engineer or registered architect and submit an application to EPA.
The path to Energy Star certification is not an easy one. It generally requires a commitment from the boardroom, as was the case for RagingWire Data Centers, which received its second Energy Star certification, this time for its second Sacramento, California, colocation facility (CA2).
"We have pushed hard to lead the colocation industry in energy efficiency and availability," says George Macricostas, RagingWire chairman and CEO, "and we will continue in this effort as we expand our facilities."
Jean Lupinacci, chief of the Energy Star program for commercial buildings and industrial plants, recently recognized RagingWire's Energy Star achievement for its second Sacramento data center.
"By analyzing their energy performance with EPA's Portfolio Manager and applying best practices learned through their first Energy Star certified data center, RagingWire demonstrates the value of a portfolio-wide approach to saving energy and reducing greenhouse gas emissions," says Lupinacci.
To achieve the Energy Star certification, RagingWire looked at many physical infrastructure elements, analyzing options to improve efficiencies and drive down overall energy consumption.
For example, dynamic fan speed controls on computer room air handlers (CRAHs), driven by programmable logic controllers (PLCs), match the cooling delivered to the data floor with the cooling required to hold the top-of-rack, cold-aisle intake temperature. Expanded waterside and airside economization allow the electrical rooms to be cooled with free outside air.
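The control idea described above — raising fan speed only when the warmest cold-aisle intake reading drifts over its setpoint — can be sketched as a simple proportional loop. This is an illustrative sketch only; the function name, setpoint, and gain are assumptions, not RagingWire's actual PLC logic.

```python
def crah_fan_speed(intake_temps_c, setpoint_c=24.0, gain=10.0,
                   min_speed=30.0, max_speed=100.0):
    """Return a CRAH fan speed (% of max) from cold-aisle intake readings.

    The loop keys off the hottest reading (the top-of-rack intake),
    so cooling delivered tracks cooling actually required.
    """
    hottest = max(intake_temps_c)
    error = hottest - setpoint_c              # degrees above setpoint
    speed = min_speed + gain * max(error, 0.0)
    return min(max(speed, min_speed), max_speed)

# Aisle comfortably below setpoint: fans idle at minimum speed.
print(crah_fan_speed([22.1, 22.8, 23.0]))   # 30.0

# Hottest intake 2 degrees C over setpoint: speed rises proportionally.
print(crah_fan_speed([23.5, 25.0, 26.0]))   # 50.0
```

The payoff is cubic: fan power scales roughly with the cube of fan speed, so holding fans at partial speed whenever the floor allows it saves far more energy than the speed reduction alone suggests.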
Additional energy savings resulted from the installation of high-capacity adiabatic humidification units in place of traditional gas-fired steam humidifiers. The adiabatic devices draw the heat of evaporation from the air itself, reducing air temperature rather than consuming fuel to boil water.
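The sensible cooling an adiabatic humidifier provides can be estimated from the latent heat the evaporating water draws out of the air stream. A back-of-envelope sketch, using standard textbook constants rather than figures from the article:

```python
LATENT_HEAT_KJ_PER_KG = 2450.0   # latent heat of vaporization of water
CP_AIR_KJ_PER_KG_K = 1.006       # specific heat of dry air

def adiabatic_temp_drop_c(moisture_added_g_per_kg):
    """Temperature drop (deg C) when the given moisture (grams of water
    per kg of dry air) evaporates adiabatically into the air stream."""
    dw_kg = moisture_added_g_per_kg / 1000.0
    return dw_kg * LATENT_HEAT_KJ_PER_KG / CP_AIR_KJ_PER_KG_K

# Adding 2 g of moisture per kg of air cools it by roughly 4.9 deg C.
print(round(adiabatic_temp_drop_c(2.0), 1))   # 4.9
```

The same humidification that a steam unit would buy with burned gas is delivered here as free evaporative cooling, which is why the swap shows up twice on the energy bill.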
The cooling system for the Sacramento campus is a centralized chilled water plant with redundant modular cooling units, onsite fresh water wells, cooling loops and multiple cooling towers. Chilled water is pumped beneath the data center's raised floors.
RagingWire also worked with its building automation system vendors to improve automatic controls for the chiller plants and CRAHs. HVAC plant pumps were retrofitted with Schneider Electric variable frequency drives (VFDs) for increased energy efficiency. The results are an optimized cooling flow to the data room floor and increased overall cooling efficiency.
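The energy case for the VFD retrofit rests on the pump affinity laws: shaft power scales with the cube of speed, so modest speed reductions yield outsized savings. A hedged sketch of the arithmetic; the 100 kW rating is a hypothetical example, not a figure from RagingWire's plant:

```python
def pump_power_kw(rated_kw, speed_fraction):
    """Approximate pump shaft power at reduced speed (affinity laws):
    power scales with the cube of the speed fraction."""
    return rated_kw * speed_fraction ** 3

rated = 100.0  # hypothetical 100 kW chilled-water pump
for frac in (1.0, 0.9, 0.8, 0.7):
    print(f"{frac:.0%} speed -> {pump_power_kw(rated, frac):.1f} kW")

# 80% speed draws about 51 kW, roughly half the rated power.
```

Without a VFD, a pump runs at full speed and the flow is throttled by valves, wasting the difference; with one, the drive slows the motor to match the cooling load and the cubic law pockets the savings.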
Many of the energy efficiency measures in CA2 proved themselves in RagingWire's first Sacramento (CA1) data center, which achieved Energy Star certification in 2011 following an efficiency-driven retrofit. Together, CA1 and CA2 contain more than 500,000 square feet of data center space and 38 megawatts (MW) of critical IT power on their colocation campus. Cost-effective and energy-efficient improvements were also implemented in RagingWire's newest 150,000-square-foot data center in Ashburn, Va.