

Energy Efficiency in Data Center Design and Operations



Cloud computing, IoT and generative AI searches are putting pressure on owners and managers to ensure energy efficiency over the life of data centers.


By Dan Hounsell, Senior Editor


Cloud computing. The Internet of Things. Remote connectivity. Generative artificial intelligence (AI) searches. The work of processors in data centers makes all these things happen. 

The proliferation of smart, connected devices throughout institutional and commercial facilities also is driving the development of new data centers, updates to existing facilities and increases in data center capacity, all of which require large amounts of energy. 

How can data centers use energy more efficiently? The answer lies partly in integrating new systems and capabilities with a focus on better managing energy use. Whatever form the answer takes, it is an increasingly challenging responsibility for data center owners and facilities managers. 

“Operating a data center efficiently is one of the toughest challenges in the industry because it involves every system at once: IT, power, cooling and airflow,” says Chad Ludwig, senior director of client operations with McKinstry, an engineering and energy management firm. 

Design decisions 

The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) sets the benchmark for data center cooling and energy efficiency. ASHRAE data center design guidelines promote efficient cooling methods, including hot aisle/cold aisle configurations and liquid cooling technologies. Managers need to understand the requirements and ensure their facilities comply. 

“Trying to meet ASHRAE 90.4 and being compliant with that in an existing data center can be pretty challenging because depending upon which code is referenced and which jurisdiction that’s referenced in, there are various levels of requirements inside of ASHRAE 90.4 for the minimum energy standards for data centers,” says Matt Koukl, principal with Affiliated Engineers Inc., a consulting engineering firm, adding that data centers can require tens to hundreds of thousands of cubic feet per minute of air entering the building and getting exhausted out. “That can really create some serious challenges relative to how to do that.” 
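
For a rough sense of those volumes, the standard sensible-heat relationship for air, q (BTU/hr) ≈ 1.08 x CFM x temperature rise (degrees F), can be turned into a back-of-the-envelope estimate. The short sketch below, written in Python, uses assumed figures for IT load and air temperature rise; it is illustrative only and is not drawn from ASHRAE 90.4.

# Illustrative airflow estimate; the 2 MW load and 20 deg F rise are assumed figures
def required_cfm(it_load_kw, delta_t_f):
    # Sensible-heat rule of thumb for air: q (BTU/hr) = 1.08 * CFM * dT (deg F)
    btu_per_hr = it_load_kw * 3412.14   # convert kilowatts to BTU per hour
    return btu_per_hr / (1.08 * delta_t_f)

print(round(required_cfm(2000, 20)))    # roughly 316,000 CFM for a 2 MW IT load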

Compliance issues for data center design require owners and managers to address a series of requirements that intersect with one another in ways that create complex challenges. Designing energy-efficient data centers requires balancing flexibility, cost, reliability and, now, emerging technologies like AI and open compute platforms. 

“AI workloads, especially those powered by high-performance processors such as GPUs and TPUs, generate far greater power density than traditional systems, often reaching 30-100 kilowatts per rack,” Ludwig says, referring to graphics processing units and tensor processing units. “That pushes designers to move beyond air cooling and consider liquid or immersion cooling methods. These options improve efficiency but require new commissioning practices, leak prevention strategies and specialized maintenance skills.” 
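
Those density numbers help explain the shift: water carries far more heat per unit of flow than air, so the volumes involved shrink dramatically. The sketch below compares the two for a single rack using assumed figures (a 50-kilowatt rack and typical temperature rises) and the common rules of thumb q = 1.08 x CFM x dT for air and q = 500 x GPM x dT for water.

# Rough air-versus-water comparison for one high-density rack (assumed figures)
RACK_KW = 50.0
BTU_PER_HR = RACK_KW * 3412.14          # convert kilowatts to BTU per hour

cfm = BTU_PER_HR / (1.08 * 25.0)        # air at a 25 deg F temperature rise
gpm = BTU_PER_HR / (500 * 15.0)         # water at a 15 deg F temperature rise

print(f"{cfm:,.0f} CFM of air vs. {gpm:.0f} GPM of water")
# prints roughly 6,300 CFM of air vs. about 23 GPM of water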

Data center design considerations also need to address energy efficiency related to future expansions of an organization’s computing needs, including the modular, open compute platforms that increasingly support them. 

“These modular systems increase density and improve performance but can create thermal hot spots or airflow imbalances if facilities are not designed to match their layout and power needs,” he says. “Power infrastructure must also evolve. Transformers, (uninterruptible power supply) units and power distribution systems must handle heavier, more concentrated loads while minimizing energy loss.” 

Taken together, these emerging and evolving issues in data center design mean that owners and managers no longer can afford to plan for just the near term when it comes to energy efficiency. 

“The convergence of AI and open compute means designers can no longer focus on individual systems,” Ludwig says. “True efficiency depends on how power delivery, cooling and modular design work together. The most advanced data centers now use real-time monitoring, dynamic airflow management and scalable liquid cooling to stay efficient as AI workloads grow without sacrificing reliability or performance.” 

Eye on operations 

Energy efficiency challenges for data center owners and managers extend well beyond the design phase to encompass increasingly complex day-to-day operations.  

“On the IT side, servers and accelerators evolve constantly, and workloads fluctuate throughout the day,” Ludwig says. “Many facilities run underused equipment or legacy systems that cannot scale power use dynamically. That makes it hard to cut energy waste without hurting performance. Optimizing efficiency at the workload level requires visibility into where and how energy is used, along with software tools that can adjust performance in real time.” 
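
What that visibility looks like in practice varies by facility, but the underlying arithmetic is simple. The sketch below is a minimal, hypothetical example rather than any particular vendor's tool: it computes power usage effectiveness (PUE) from two assumed meter readings and flags lightly used servers that still draw significant power.

# Hypothetical readings; a minimal sketch of workload-level energy visibility
facility_kw = 1450.0                    # assumed total facility power
it_kw = 1000.0                          # assumed IT (UPS output) power

print(f"PUE: {facility_kw / it_kw:.2f}")     # power usage effectiveness, here 1.45

# (server name, average utilization %, power draw in watts) - made-up samples
servers = [("web-01", 6, 210), ("db-01", 62, 480), ("batch-07", 3, 190)]
for name, util, watts in servers:
    if util < 10:
        print(f"{name}: {util}% utilized but drawing {watts} W; consolidation candidate")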

Upgrades to power infrastructures create additional challenges. 

“Every link in the electrical chain adds small energy losses that accumulate over time,” he says. “Replacing or reconfiguring major components while keeping systems online is costly and risky. Even small imbalances in load distribution can reduce efficiency across the facility.” 
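
Those losses compound multiplicatively along the chain. The sketch below works through an example with assumed component efficiencies; real values vary with equipment, topology and load level.

# Chain-efficiency example with assumed component efficiencies (illustrative only)
stages = {"transformer": 0.985, "UPS": 0.95, "distribution": 0.99}

overall = 1.0
for stage, eff in stages.items():
    overall *= eff                      # efficiencies multiply stage by stage

losses_kw = 1000 * (1 - overall)        # losses on each 1 MW drawn from the utility
print(f"Overall efficiency {overall:.1%}; about {losses_kw:.0f} kW lost per MW")
# prints roughly 92.6% overall efficiency; about 74 kW lost per MW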

Airflow management to keep data centers operating smoothly also requires that managers and technicians give careful consideration to facility and system components, as well as changes in workloads. 

“Containment systems, aisle separation and sealing are critical, yet even a misplaced floor tile or unsealed cable opening can create hot spots,” Ludwig says. “As workloads shift, these thermal conditions change, too, requiring constant monitoring and adjustment.” 

These shifting demands are driving developments in an additional type of data center cooling. 

“We’re now starting to see a lot of these enterprises now taking in these systems that are requiring liquid cooling,” Koukl says. “Liquid cooling comes in many different forms, but the typical form that we’re seeing is direct-to-chip liquid cooling. Instead of a heat exchanger that has a whole bunch of fins on it and you move air across it, now there’s an enclosed heat exchanger that sits on top of a chip that has micro channels in it with water going in and coming back out of it to cool that chip. 

“As organizations decide that they want to have some form of artificial intelligence computing in their systems and in their facility, those systems use so much more power.” 
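
The coolant flow rates involved in direct-to-chip cooling are modest. The sketch below estimates them from the basic heat-transfer relationship for water, q = mass flow x specific heat x temperature rise; the 700-watt chip power and 10-degree Celsius coolant temperature rise are assumed figures for illustration, not data from Koukl.

# Illustrative direct-to-chip flow estimate; chip power and temperature rise are assumed
CP_WATER = 4186.0                       # specific heat of water, J/(kg*K)

def coolant_flow_lpm(chip_watts, delta_t_c):
    kg_per_s = chip_watts / (CP_WATER * delta_t_c)   # q = m_dot * cp * dT
    return kg_per_s * 60.0              # water is roughly 1 kg per liter

print(f"{coolant_flow_lpm(700, 10):.1f} L/min per chip")   # about 1.0 L/min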

Technology advances and growing demand are combining to create data centers that require new types of training and skills for technicians to ensure they can inspect, test and maintain systems effectively. 

“We’re seeing higher and higher density equipment, meaning more power utilization,” Koukl says. “The power architectures and the power delivered to these cabinets now is so much greater than it has ever been, and that requires people that have a keen understanding of how to operate that power, how to deploy that power, and how to provide maintenance to that now large kit of power that is being deployed to support these higher-density loads.”  

Liquid-cooled systems in particular are changing the way managers and operators approach their responsibilities. 

“It used to be that a data center manager or data center operator who knew how to put in servers and rack and stack IT equipment now needs to learn plumbing and how water works, and they need a chemical engineer that understands water treatment and water quality,” Koukl says. “The data center for some operators is becoming infinitely more complex around the operation side of it because you now have to understand water chemistry, water quality, plumbing, piping systems, etc.” 

The pace of changes impacting energy efficiency in data center design and operations is not likely to slow anytime soon, meaning owners and managers will benefit from taking a long-term view of the challenges ahead. 

“Energy efficiency is not a one-time goal,” Ludwig says. “It is a long-term strategy that evolves with technology. As AI and high-density computing reshape demand, data centers must be designed for adaptability. That means building in flexibility to integrate new technologies, from advanced liquid cooling to heat recovery and grid-interactive systems.” 

Dan Hounsell is senior editor for the facilities market. He has more than 30 years of experience writing about facilities maintenance, engineering and management. 





Posted on 11/7/2025



