High-density computing workloads, such as those needed for AI training and inference, are generating heat levels that traditional air cooling methods can no longer manage effectively. As a result, there is a growing shift towards liquid cooling technologies, even within data centers that typically rely on air cooling.
Recent statistics show that average power densities in data centers have more than doubled in just two years, rising from 8 kilowatts to 17 kilowatts per rack. Projections suggest these figures could climb to 30 kilowatts by 2027 as AI workloads escalate. Some individual racks already draw more than 80 kilowatts, and Nvidia’s latest server configurations require densities of up to 120 kilowatts.
A recent Uptime Institute survey found that, as of early 2024, about 22% of data center operators had already implemented direct liquid cooling. The transition is often happening in hybrid setups, with only about 10% of a facility’s racks using liquid cooling.
The increasing adoption of liquid cooling is primarily driven by the exponential rise in AI workloads. The Dell’Oro Group has revised its market forecasts upwards, predicting that the data center infrastructure segment will grow annually by 14% to reach $61 billion by 2029, partly fueled by higher liquid cooling adoption rates.
The current trend indicates that while most data centers are still air-cooled, liquid cooling is rapidly gaining traction. Lenovo’s Scott Tease pointed out that, despite air cooling being a familiar technology, liquid cooling is expanding at an impressive rate, potentially becoming one of the fastest-growing sectors of the data center market.
Research by JLL finds that air cooling works efficiently up to about 20 kilowatts per rack. Beyond that threshold, liquid cooling approaches such as active rear-door heat exchangers are favored. At higher power densities, around 100 kilowatts and above, direct-to-chip liquid cooling becomes the most effective option, and in extreme cases above 175 kilowatts, immersion cooling comes into play.
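The JLL thresholds amount to a simple decision rule mapping rack power density to a cooling approach. As a rough sketch (the function name and exact cutoffs are illustrative, taken from the figures above rather than any engineering standard):

```python
def recommended_cooling(rack_kw: float) -> str:
    """Suggest a cooling approach for a given rack power density (kW),
    following the approximate JLL thresholds cited in the text.
    Illustrative only -- real designs also weigh facility constraints."""
    if rack_kw <= 20:
        return "air cooling"
    elif rack_kw < 100:
        return "active rear-door heat exchanger"
    elif rack_kw < 175:
        return "direct-to-chip liquid cooling"
    else:
        return "immersion cooling"

# An 80 kW AI rack sits in the rear-door heat exchanger band,
# while a 120 kW Nvidia-class rack calls for direct-to-chip cooling.
print(recommended_cooling(80))   # → active rear-door heat exchanger
print(recommended_cooling(120))  # → direct-to-chip liquid cooling
```

In practice the bands overlap and operators mix approaches within one facility, but the sketch captures why density, more than any other single metric, drives the cooling decision.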
Liquid cooling systems are increasingly considered the standard in new data center constructions, but the overall capacity is struggling to meet rising global demand, which McKinsey estimates could grow from 60 gigawatts to as much as 300 gigawatts by 2030. The growing demand for liquid-cooling-ready space is leaving enterprises scrambling for capacity, leading to record-low vacancy rates in colocation facilities in North America.
Upgrading existing air-cooled data centers to support liquid cooling poses engineering challenges, but many see it as a more environmentally responsible option than building new facilities. Experts like Josh Claman of Accelsius note that a large share of a data center’s lifetime carbon emissions comes from its initial construction, making retrofitting the more sustainable choice.
Hybrid systems that combine liquid and air cooling can also save energy: when liquid cooling handles the hottest components, the air-cooled portion of the facility can run at higher temperature setpoints, reducing overall energy consumption. The steadier operating temperatures of liquid cooling reduce thermal wear on IT equipment, and liquid systems often involve fewer moving parts, cutting maintenance needs.
There’s also potential for creative ways to utilize the hot liquid expelled from data centers. Gerald Kleyn from Hewlett Packard Enterprise mentioned that this heat could be repurposed for heating adjacent buildings, fostering further energy efficiency.
For older or existing data centers lacking water infrastructure, companies can consider liquid cooling via self-contained units that do not require extensive plumbing. These systems can still generate energy savings of around 15%.
As companies adapt to the needs of high-density AI servers, mastering liquid cooling has become increasingly critical, and those unable to retrofit or build out new facilities may struggle to keep pace with rapidly evolving technological demands.