- Data from Lennox Data Centre Solutions suggests that a dedicated AI rack could consume up to 1MW by 2030, a level previously associated with an entire data center. The average rack, by contrast, is climbing only gradually toward 30-50kW.
- This means an AI rack would draw 20-30 times more power than a conventional rack, placing enormous pressure on power supply and cooling systems (see the back-of-the-envelope sketch after this list).
- Ted Pulfer, Director at Lennox, emphasizes that cooling has become central to the industry rather than merely auxiliary infrastructure, with methods such as liquid cooling now treated as a strategic priority.
- In fact, the industry is collaborating more closely than ever: manufacturers, engineers, and customers are testing new solutions both in the lab and in real-world deployments to tackle the thermal management challenge.
- A new trend is the replacement of traditional low-voltage alternating current (AC) with high-voltage direct current (HVDC, ±400V), which reduces power loss and allows smaller cable cross-sections (a simplified loss calculation follows the list). Coolant Distribution Units (CDUs) coordinate the flow of liquid to the racks, where it is routed to cold plates mounted directly on the hottest components.
- Microsoft is experimenting with microfluidics: etching microscopic channels into the back of the chip so coolant flows directly over the silicon. Reported results: up to 3 times better cooling performance than cold plates, and a 65% reduction in the GPU's peak temperature rise. Combined with AI monitoring of hot spots, coolant distribution becomes even more precise (a quick consistency check on these figures also follows the list).
- Although hyperscalers such as Microsoft, Google, and Amazon lead the way, Pulfer sees room for smaller operators: the market is changing rapidly, and large orders create supply chain bottlenecks, opening the door for more agile players.
- The focus of the digital infrastructure industry has shifted: it is no longer just computing performance but the ability to remove heat for sustainable operation.
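
To make the rack-power comparison concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers are the article's projections, not measured data.

```python
# Ratio of a projected 1 MW AI rack to a 30-50 kW conventional rack.

AI_RACK_KW = 1000.0                  # 1 MW dedicated AI rack, 2030 projection
CONVENTIONAL_RACK_KW = (30.0, 50.0)  # projected average rack range

for kw in CONVENTIONAL_RACK_KW:
    ratio = AI_RACK_KW / kw
    print(f"1 MW AI rack vs {kw:.0f} kW conventional rack: ~{ratio:.0f}x")

# Prints ~33x and ~20x, in line with the article's "20-30 times".
```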
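
The power-loss argument for ±400V HVDC can be illustrated with the standard I²R conduction-loss formula: for the same delivered power, a higher distribution voltage means less current, and resistive loss scales with the square of the current. This is a deliberately simplified single-feeder sketch; the 5 mΩ resistance is a hypothetical placeholder, and power factor, three-phase wiring, and converter losses are ignored.

```python
# Simplified conduction-loss comparison: low-voltage AC vs +/-400 V HVDC.
# Illustrative values only; R_FEEDER_OHM is a made-up feeder resistance.

P_RACK_W = 1_000_000   # 1 MW rack load (the article's 2030 projection)
R_FEEDER_OHM = 0.005   # assumed feeder resistance (hypothetical)

def conduction_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss for the current needed to deliver power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

for label, volts in [("415 V AC (simplified single-feeder model)", 415.0),
                     ("+/-400 V HVDC (800 V pole-to-pole)", 800.0)]:
    loss_w = conduction_loss(P_RACK_W, volts, R_FEEDER_OHM)
    print(f"{label}: {P_RACK_W / volts:,.0f} A -> {loss_w / 1000:.1f} kW lost")

# Roughly doubling the voltage halves the current, cutting I^2*R loss by ~4x;
# lower current is also why thinner cables (smaller cross-sections) suffice.
```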
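
Finally, a quick consistency check on the reported microfluidics figures, using a first-order thermal model (temperature rise = power × thermal resistance). The chip power and cold-plate resistance below are hypothetical placeholders; only the 65% and 3x figures come from the article, and linking them assumes the temperature rise scales linearly with thermal resistance.

```python
# First-order check: does a 65% lower temperature rise match ~3x better cooling?

P_CHIP_W = 700.0        # assumed GPU power draw (hypothetical)
R_TH_COLD_PLATE = 0.05  # assumed junction-to-coolant resistance in K/W (hypothetical)

dt_cold_plate = P_CHIP_W * R_TH_COLD_PLATE    # rise above coolant with a cold plate
dt_microfluidic = dt_cold_plate * (1 - 0.65)  # the article's reported 65% reduction

print(f"Cold plate:    {dt_cold_plate:.1f} K above coolant temperature")
print(f"Microfluidics: {dt_microfluidic:.1f} K above coolant temperature")
print(f"Implied cooling improvement: ~{1 / (1 - 0.65):.1f}x")

# ~2.9x implied improvement, consistent with the ~3x figure the article cites.
```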
📌 Forecasts suggest that by 2030 each AI rack could reach 1MW, 20-30 times more than a conventional rack (30-50kW). The biggest challenge is no longer raw computational power but energy and thermal management. New technologies such as microfluidics deliver up to 3 times better cooling and cut GPU temperature rise by 65%. The digital infrastructure race now hinges on the ability to distribute power and remove heat, which will determine the future of global data centers.
