As Artificial Intelligence (AI) models grow in complexity and computational demand, the traditional air-cooled systems in data centers are struggling to keep up with rising power densities and the resulting heat dissipation challenges. While legacy facilities operated at around 20 kW per rack, modern hyperscale data centers must now support more than 135 kW per rack, underscoring the urgent need for new approaches to managing the escalating energy requirements and cooling costs of AI workloads.
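To see why air cooling breaks down at these densities, a rough heat-transfer estimate helps: the airflow needed to remove a rack's heat scales linearly with its power draw. The sketch below uses assumed typical values (air properties, a 15 K inlet-to-outlet temperature rise), not vendor specifications.

```python
# Illustrative estimate of the airflow required to air-cool a server rack.
# Energy balance: Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT).
# All constants below are assumed typical values for illustration only.

AIR_CP = 1005.0   # J/(kg*K), specific heat of air
AIR_RHO = 1.2     # kg/m^3, approximate air density at room temperature
DELTA_T = 15.0    # K, assumed rack inlet-to-outlet temperature rise

def cfm_required(rack_kw: float) -> float:
    """Airflow (cubic feet per minute) needed to remove rack_kw of heat."""
    mass_flow = rack_kw * 1000.0 / (AIR_CP * DELTA_T)  # kg/s
    volume_flow = mass_flow / AIR_RHO                  # m^3/s
    return volume_flow * 2118.88                       # m^3/s -> CFM

for kw in (20, 135):
    print(f"{kw} kW rack needs roughly {cfm_required(kw):,.0f} CFM of airflow")
```

Under these assumptions, a 135 kW rack needs nearly seven times the airflow of a 20 kW rack, which quickly exceeds what practical fan power and raised-floor air delivery can sustain.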
Liquid cooling has emerged as a key strategy for addressing these challenges, offering significant improvements in heat rejection and energy efficiency. By reducing reliance on mechanical chillers, liquid-cooled systems not only lower operational costs but also enable greater scalability and performance for high-density server racks. This shift is particularly important for data centers powering next-generation AI models, where energy and heat loads routinely exceed the capabilities of conventional cooling methods.
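The efficiency argument can be made concrete with a back-of-the-envelope comparison of facility energy using power usage effectiveness (PUE), the ratio of total facility energy to IT energy. The PUE values and 1 MW IT load below are assumed for illustration, not measured figures for any particular facility.

```python
# Back-of-the-envelope annual energy comparison: air-cooled vs. liquid-cooled.
# PUE = total facility energy / IT energy, so total = IT load * PUE.
# The IT load and both PUE values are assumptions chosen for illustration.

HOURS_PER_YEAR = 8760

def annual_facility_mwh(it_load_kw: float, pue: float) -> float:
    """Total facility energy in MWh/year for a given IT load and PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR / 1000.0

it_kw = 1000.0                                      # assumed 1 MW of IT load
air = annual_facility_mwh(it_kw, pue=1.5)           # assumed air-cooled PUE
liquid = annual_facility_mwh(it_kw, pue=1.15)       # assumed liquid-cooled PUE
print(f"air-cooled: {air:,.0f} MWh/yr, liquid-cooled: {liquid:,.0f} MWh/yr, "
      f"savings: {air - liquid:,.0f} MWh/yr")
```

Even with these rough numbers, trimming PUE from 1.5 to 1.15 on a 1 MW IT load saves on the order of 3,000 MWh per year, which is why chiller-light liquid cooling is attractive at AI-scale power densities.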
NVIDIA's latest rack-scale offerings, the GB200 NVL72 and GB300 NVL72, exemplify this new approach. These systems are designed for the demanding inference workloads of trillion-parameter large language models, integrating advanced liquid cooling to sustain peak server performance while improving water efficiency. Their architecture is optimized for both test-time scaling inference and operational sustainability, positioning the Blackwell platform as a frontrunner in future-ready, environmentally conscious AI infrastructure.