The Rise of Liquid Cooling in Data Centers
With rack densities climbing in new-generation data centers, traditional air cooling is no longer sufficient for AI workloads.
By Ronnie Wendt, Contributing Writer
Rack densities are climbing fast in new-generation data centers, and traditional air cooling can no longer keep pace with AI workloads. As a result, liquid cooling is rapidly becoming mainstream.
“When you get that huge amount of power into a physical space, the amount of heat that's generated is just amazing,” says Bin Lu, executive vice president of power products at Schneider Electric. “Liquid cooling is the only technology that can remove the heat efficiently.”
Mark Swift, who leads engineering and product management at Starline, adds, “For high-density AI workloads, direct-to-chip liquid cooling, immersion cooling, rear-door heat exchangers and in-row cooling are becoming essential. Modern GPUs can exceed 700 watts per chip, and AI racks can demand 50 to 100+ kW.”
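The figures Swift cites translate into a simple back-of-envelope heat budget. The sketch below uses his 700 W-per-GPU number; the GPU count per server, servers per rack, and non-GPU overhead factor are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope rack heat load using the per-GPU wattage quoted above.
# All other values are illustrative assumptions; real deployments vary.

GPU_WATTS = 700        # "Modern GPUs can exceed 700 watts per chip"
GPUS_PER_SERVER = 8    # assumed: a typical 8-GPU AI server
SERVERS_PER_RACK = 8   # assumed rack packing
OVERHEAD = 1.25        # assumed 25% extra for CPUs, memory, fans, power loss

def rack_heat_load_kw(gpu_watts=GPU_WATTS,
                      gpus_per_server=GPUS_PER_SERVER,
                      servers_per_rack=SERVERS_PER_RACK,
                      overhead=OVERHEAD):
    """Total heat a rack must reject, in kilowatts."""
    return gpu_watts * gpus_per_server * servers_per_rack * overhead / 1000

print(f"{rack_heat_load_kw():.0f} kW")  # prints "56 kW"
```

Even this conservative configuration lands in the 50 to 100+ kW range Swift describes, which is well beyond what conventional air cooling can reject.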
Michael Giannou, global general manager of data centers with Honeywell, agrees.
“If you’re running AI applications with high-power chips, there’s really only one way to cool them — liquid cooling,” he says. “Traditional air cooling technologies cannot cool those chips.”
Still, some smaller facilities are adopting hybrid cooling architectures.
“For moderate densities, rear-door heat exchangers and contained hot- and cold-aisle configurations with in-row cooling can support 20 to 30 kilowatts per rack,” Swift says.
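The rough thresholds the sources describe can be expressed as a simple selection helper. This is an illustrative sketch built only from the figures quoted in this article, not a design guide; the function name and cutoff are assumptions.

```python
# Illustrative mapping from rack density to the cooling approaches named in
# this article. The 30 kW cutoff comes from Swift's quoted range; it is a
# rough figure, not an engineering threshold.

def cooling_options(rack_kw: float) -> list[str]:
    """Return the cooling approaches this article associates with a density."""
    if rack_kw <= 30:
        # Swift: rear-door heat exchangers and contained hot/cold aisles
        # with in-row cooling can support 20 to 30 kW per rack.
        return ["contained hot/cold aisles", "rear-door heat exchanger",
                "in-row cooling"]
    # Above that, the sources agree liquid is required (50 to 100+ kW AI racks).
    return ["direct-to-chip liquid cooling", "immersion cooling"]

print(cooling_options(25))
print(cooling_options(80))
```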
Ronnie Wendt is a freelance writer based in Minocqua, Wisconsin.