How to Apply Cold-Aisle Monitoring Strategies in Data Centers
OTHER PARTS OF THIS ARTICLE
Pt. 1: Air-Flow Management Strategies for Data Centers
Pt. 2: Why Cold Aisle Air-Flow Management Works
Pt. 3: This Page
Implementing the cold-aisle monitoring strategy is straightforward. All of the steps described below flow from one initial measurement: the IT equipment loads.
The kW loads of cabinets can be obtained from power distribution units, remote power panels, smart power strips, etc. If no such devices are available, a good estimate of the number of servers and an estimate of the average power per server installed per cold aisle will usually net a result that is good enough as a starting point.
Note that the total kW loading per cabinet is not important. Whether a cabinet is loaded to 20 kW or 2 kW, and whether high-power cabinets are placed right next to low-power cabinets, is irrelevant. Each server installed within the cabinets will pull in however much air it needs, without influence from the adjacent cabinets.
Next, calculate the total amount of heat (in kW) being generated by the two rows of cabinets that define the cold aisle. To do that, estimate the overall delta T (the average temperature rise of the air as it moves from the cold aisle, through the IT equipment, and out into the hot aisle). Do not rely on the data from the computer room air handling/computer room air conditioning (CRAH/CRAC) units because a poorly performing data hall will have a delta T that is too low. Poorly operating data centers typically register a delta T in the range of 5 degrees F to 10 degrees F. A good starting assumption, based on the performance of IT equipment at moderate loading levels, would be a 20 degree F delta T. (This can be adjusted at a later time.)
Now calculate the amount of air required to cool the cold aisle. Given the estimated kW of load in the rows on both sides of a cold aisle, use: air flow in cubic feet per minute (CFM) = (kW load x 3413) / (1.085 x assumed delta T).
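As a rough sketch, that conversion can be wrapped in a short function (Python is assumed here; the constants 3413 BTU/hr per kW and 1.085 are the ones from the formula above):

```python
def required_cfm(kw_load: float, delta_t_f: float = 20.0) -> float:
    """Air flow (CFM) needed to cool one cold aisle.

    kw_load:   total kW generated by the two rows of cabinets
               that define the cold aisle.
    delta_t_f: assumed temperature rise across the IT equipment
               (degrees F); 20 F is the suggested starting point.
    """
    # CFM = (kW x 3413) / (1.085 x delta T)
    return kw_load * 3413.0 / 1.085 / delta_t_f

# Example: rows totaling 50 kW at an assumed 20 F delta T
# need roughly 7,864 CFM delivered into the cold aisle.
```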
Start with the assumption that the data hall will be cooled by perforated tiles. Using the tile manufacturer’s performance charts with the assumed underfloor pressure, determine how much air a single tile will deliver. As an example, one manufacturer’s tile will deliver 400 CFM of air at an underfloor pressure of 0.03 inches.
Based on this selection, divide the amount of air to be delivered into the cold aisle by the air flow per tile, rounding up. This gives the total number of tiles required in that cold aisle. Is there not enough space in the cold aisle to place that many tiles? For low-density installations (most legacy data centers are low density), this should not be an issue.
Repeat for every cold aisle in the data hall. If a single cold aisle requires more tiles than can fit in an aisle because the cold aisle is not wide enough, the process should be repeated using grates for the delivery of air. One manufacturer’s grates will deliver 1,000 CFM of air at an underfloor pressure of 0.015 inches.
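A minimal sketch of the tile/grate selection, using the manufacturer example figures above (400 CFM per tile, 1,000 CFM per grate); the space limit parameter is a hypothetical stand-in for how many tiles physically fit in the aisle:

```python
import math

TILE_CFM = 400.0    # example perforated tile at 0.03 in. underfloor pressure
GRATE_CFM = 1000.0  # example grate at 0.015 in. underfloor pressure

def openings_needed(aisle_cfm: float, max_tiles_in_aisle: int):
    """Return (kind, count) of floor openings for one cold aisle.

    Falls back from perforated tiles to grates when the aisle is
    not wide enough to hold the required number of tiles.
    """
    tiles = math.ceil(aisle_cfm / TILE_CFM)
    if tiles <= max_tiles_in_aisle:
        return ("tile", tiles)
    return ("grate", math.ceil(aisle_cfm / GRATE_CFM))

# A 7,864 CFM aisle needs 20 tiles; if only 10 fit, 8 grates suffice.
```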
Once the number of tiles (or grates) per cold aisle is determined, their placement should not remain static. Tracking the load per cold aisle as IT equipment is added or removed, whether in a spreadsheet or in a more sophisticated asset management tool or database, allows the operator to change the number of tiles periodically so that each cold aisle always receives the appropriate amount of air.
The appropriate amount is just slightly more than the air flow the IT equipment consumes. Much more than that wastes energy; any less will guarantee an inadequate flow of cooling air, which leads to hot spots.
Once this strategy is put into operation, it may be worthwhile to observe temperatures at various sampling locations. If there are still hot spots, redo the calculation with an assumed delta T of 18 degrees F and recompute the appropriate number of tiles. If hot spots persist, repeat with 16 degrees F, and so on. Conversely, if all the samples indicate that there are no hot spots at all, it may be worth raising the assumed delta T in small increments and recalculating the number of tiles.
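The tuning loop just described might be sketched as follows; the hot-spot check is a hypothetical callback standing in for field temperature measurements, and the 2 F step matches the 20, 18, 16 F sequence in the text:

```python
import math

def tune_delta_t(kw_load, hot_spots_remain, tile_cfm=400.0,
                 start_delta_t=20.0, min_delta_t=10.0):
    """Lower the assumed delta T in 2 F steps, adding tiles each
    time, until sampled temperatures show no hot spots.

    hot_spots_remain(tiles) is a hypothetical callback returning
    True while field measurements still show hot spots with that
    many tiles placed.
    """
    delta_t = start_delta_t
    while delta_t >= min_delta_t:
        cfm = kw_load * 3413.0 / 1.085 / delta_t
        tiles = math.ceil(cfm / tile_cfm)
        if not hot_spots_remain(tiles):
            return delta_t, tiles
        delta_t -= 2.0  # still too hot: assume a lower delta T
    return delta_t + 2.0, tiles  # settled at the last delta T tried

# For a 50 kW aisle whose hot spots clear once 22 tiles are placed,
# the loop settles on an 18 F assumed delta T.
```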
There is one last aspect of this strategy that still needs to be addressed: the total amount of air put out by the CRAH/CRAC units. If these units are simply on/off, as assumed and noted earlier, the total amount of air they deliver into the data hall has to match the total amount of air the tiles put into the same space. If a single unit puts out 10,000 CFM and a single tile is selected for 400 CFM, there should be approximately 25 tiles in the data hall. If a second unit is turned on, there should be 50 tiles. If the true load falls in between (i.e., it requires 1 1⁄2 units to be on), there will still be 2 units running, and 50 tiles should be placed on the floor. Adjustments should therefore be made by prorating the extra tiles among the aisles.
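For on/off units, this reconciliation (round the unit count up, then size the tile count to what the running units actually supply) can be sketched as follows, assuming the example figures of 10,000 CFM per unit and 400 CFM per tile:

```python
import math

UNIT_CFM = 10000.0  # one on/off CRAH/CRAC unit (example figure)
TILE_CFM = 400.0    # one perforated tile at the selected pressure

def units_and_tiles(total_required_cfm: float):
    """On/off units cannot deliver fractional air flow, so round
    the unit count up and match the floor tiles to what the
    running units actually supply; extra tiles are then prorated
    across the cold aisles.
    """
    units_on = math.ceil(total_required_cfm / UNIT_CFM)
    total_tiles = round(units_on * UNIT_CFM / TILE_CFM)
    return units_on, total_tiles

# A load needing 1.5 units' worth of air still runs 2 units,
# so the floor carries 2 x 10,000 / 400 = 50 tiles.
```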
With a CRAH/CRAC unit that can vary its speed, the best approach is to modulate all the units in unison to maintain the fixed selected underfloor pressure — the same that was used to select the tile. Then, as loads change and tiles are added or removed based on updated calculation of loads per cold aisle, the CRAH/CRAC units respond by speeding up or slowing down.
For facility managers, the important point is to monitor loads by cold aisle and adjust the amount of floor tiles in the cold aisle to deliver the appropriate amount of cooling air. It is very plausible that many legacy data centers — usually small computer rooms — in total consume a considerable portion of the national data center energy budget.
Small measures to improve data center energy use, mainly by implementing an air flow management strategy, can have huge impacts on the total energy consumed with minimal capital investment. Additionally, such a strategy will extend the life of the data center and improve the thermal environment in which the IT equipment lives.
Vali Sorell (email@example.com) is vice president and chief mission critical mechanical engineer at Glumac. He has 35 years of design experience, the last 20 of which have been dedicated to mission critical projects.