Know The Requirements For Modular Data Centers
With a modular approach, the entire data center is assembled on site from pre-fabricated modules. The equipment is pre-installed and wired; only utility connections and module-to-module cross-connection wiring and piping are required at the project site. Cabinets can be pre-installed with or without IT equipment. Facility managers thinking about building a modular data center should become familiar with requirements in a range of areas:
Design standards: Architectural design should address efficient and flexible layouts; protection of the IT equipment from the elements during construction, operation and expansion (e.g., watertightness); the aesthetics of the site and the owner's preferences; and building code egress requirements and compliance.
Engaging in early discussions with the local authority having jurisdiction is beneficial; those discussions may reveal a requirement for inspections at the assembly factory by a third party approved by the authority having jurisdiction, in addition to any UL or ETL inspection label. The greatest risk lies in poor assembly, which can lead to air and water leaks. The more joints introduced into the modular data center, the greater the chance for leaks to occur and the less efficient the assembled unit becomes.
Energy efficiency: The PUE calculation, as defined by the Department of Energy and The Green Grid, should be clearly stated and should include the total energy required to operate the data center, from the utility to the IT equipment. Declaring a low PUE that includes only a small portion of the total energy used is common; this can be avoided by asking for complete documentation of how the unit's PUE is calculated and what it does and does not include.
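As a concrete illustration of why the denominator and numerator matter, here is a minimal sketch of the PUE arithmetic. The function name and the annual energy figures are hypothetical, chosen only to show how excluding cooling or UPS losses from the total would understate the result:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every kWh drawn from the utility reaches the
    IT equipment; real facilities are always above 1.0.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures (kWh), for illustration only.
it_load = 1_000_000
cooling = 350_000
ups_losses = 60_000
lighting_and_misc = 40_000

total = it_load + cooling + ups_losses + lighting_and_misc
print(round(pue(total, it_load), 2))  # 1.45
```

Note that reporting PUE with cooling or UPS losses left out of the numerator, in this sketch, would yield a misleadingly low figure, which is exactly the practice the documentation request is meant to expose.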
Third-party inspection and UL testing: The unit should meet the requirements of the proposed UL Subject 2755 Outline of Investigation for Modular Data Centers and the proposed Article 646, Modular Data Centers, of the 2014 National Electrical Code (NEC).
Flexible computer room size: The size of the computer room should scale with additional modules to meet the data center owner's needs. The computer room should support cabinets, racks, storage units and mainframe-type computer equipment. The interior should be open and clear, with minimal columns that could interrupt the equipment layout. Clear height should be 11 feet or more to provide adequate space for overhead power distribution and cable trays, and raised flooring enables air distribution, cable distribution and flexibility in the IT equipment layout. With proper planning in the design phase, additional modules can be added to existing operating units with minimal risk to the operating computer space.
Power distribution: Energy-efficient, flexible power distribution that accommodates various kW equipment loads, types of load connections, ampacities and voltages should be used. 480/277V, 400/230V or 208/120V power to the IT equipment should be available. These systems should be designed to provide N+1 or 2N equipment and 2N power distribution for concurrent maintainability and fault tolerance, if required. UPS systems, batteries and power distribution equipment should be installed in the modules and shipped to the site pre-assembled and wired, requiring minimal wiring on site upon arrival.
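The redundancy terms above can be made concrete with a short sketch of the module-count arithmetic. The 250 kW module size and 750 kW critical load are hypothetical, and the function is an illustration of the counting logic, not a sizing tool:

```python
import math

def modules_required(load_kw: float, module_kw: float, redundancy: str) -> int:
    """Count UPS modules needed for a given critical load.

    N is the minimum module count that carries the load; N+1 adds one
    spare module; 2N duplicates the entire system.
    """
    n = math.ceil(load_kw / module_kw)
    if redundancy == "N":
        return n
    if redundancy == "N+1":
        return n + 1
    if redundancy == "2N":
        return 2 * n
    raise ValueError(f"unknown redundancy scheme: {redundancy}")

# Hypothetical: 750 kW critical load served by 250 kW UPS modules.
print(modules_required(750, 250, "N"))    # 3
print(modules_required(750, 250, "N+1"))  # 4
print(modules_required(750, 250, "2N"))   # 6
```

The jump from four modules (N+1) to six (2N) is the cost of fault tolerance across a whole distribution path, which is why the article distinguishes between the two schemes.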
Temperature control: Cooling systems that provide concurrent maintainability and fault tolerance should be installed, if required. These systems should meet the latest energy codes, such as ASHRAE 90.1-2010 or IECC-2012, and should be flexible enough to cool the range of equipment to be installed. Hot/cold aisle containment should be used to improve the efficiency of the cooling system, and the design of the unit should allow for the addition of a refrigerant or water-cooling system to permit rear-door heat exchangers, water-cooled mainframe equipment or direct-to-the-chip cooling, if needed. These approaches permit higher-temperature, lower-energy cooling and can minimize or eliminate chiller or compressor-type cooling. The base system should include economizers to reduce the cooling system's electrical use during compatible weather conditions. Choosing air, water or pumped-refrigerant economizers will lower the total energy used and the data center PUE.
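The economizer logic described above can be sketched as a simple mode decision based on outdoor conditions. The 18 °C supply setpoint, the 3 °C heat-exchanger approach and the mode names are assumed values for illustration, not figures from ASHRAE 90.1 or any controller vendor:

```python
def cooling_mode(outdoor_c: float, supply_setpoint_c: float = 18.0,
                 approach_c: float = 3.0) -> str:
    """Pick a cooling mode from the outdoor air temperature.

    Full economizer when outdoor air is cold enough to reach the supply
    setpoint (allowing for an assumed heat-exchanger approach), partial
    economizer when it can still offset some load, and mechanical
    (chiller/compressor) cooling otherwise.
    """
    if outdoor_c <= supply_setpoint_c - approach_c:
        return "economizer"
    if outdoor_c < supply_setpoint_c:
        return "partial economizer"
    return "mechanical"

print(cooling_mode(5.0))   # economizer
print(cooling_mode(16.0))  # partial economizer
print(cooling_mode(30.0))  # mechanical
```

The more hours a site spends in the first two modes, the less compressor energy is consumed, which is how economizers lower both total energy use and PUE.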