NVIDIA Blackwell Liquid Cooling Delivers 300x Water Efficiency for Data Centers

NVIDIA's Blackwell platform introduces rack-scale liquid cooling, cutting water usage for Artificial Intelligence data centers by more than 300 times compared with legacy air-cooled systems while sharply reducing energy costs.

As Artificial Intelligence (AI) models continue to grow in complexity and scale, data centers face escalating challenges in heat management, with traditional air cooling proving increasingly inadequate and energy-intensive. Conventional facilities, once operating at 20 kW per rack, now support more than 135 kW per rack, making efficient cooling a critical concern for operational stability and cost control. The industry is shifting toward liquid cooling, a solution that captures heat directly at the source and carries it away, circumventing the inefficiency of circulating chilled air and greatly improving scalability and energy efficiency in next-generation AI infrastructure.

NVIDIA's GB200 NVL72 and GB300 NVL72 systems, both built on the Blackwell platform, exemplify this trend by employing rack-scale, liquid-cooled architectures specifically designed for the demands of trillion-parameter language model inference and AI reasoning workloads. These platforms deliver substantial improvements over legacy systems: the GB200 NVL72 achieves 40 times the compute density, 30 times higher throughput, 25 times the energy efficiency, and over 300 times better water efficiency compared to traditional air-cooled servers. The GB300 NVL72 further increases these metrics, while effectively reducing the need for mechanical chillers—a historical culprit for up to 40% of a data center's electricity use—thereby achieving significant operational cost savings and environmental benefits.
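A back-of-envelope calculation makes these multipliers concrete. The baseline figures below are assumed placeholders for illustration only, not NVIDIA-published numbers; only the 300x water-efficiency and 25x energy-efficiency ratios come from the article.

```python
# Illustrative sketch: applying the quoted efficiency multipliers to an
# assumed air-cooled baseline. Baseline values are hypothetical.
air_cooled_baseline = {
    "water_liters_per_day": 300_000,  # assumed daily cooling-water use
    "cooling_kwh_per_day": 50_000,    # assumed daily cooling energy
}
multipliers = {
    "water_liters_per_day": 300,  # >300x better water efficiency (quoted)
    "cooling_kwh_per_day": 25,    # 25x energy efficiency (quoted)
}
liquid_cooled = {
    metric: usage / multipliers[metric]
    for metric, usage in air_cooled_baseline.items()
}
print(liquid_cooled)
# Under these assumptions, water use falls to 1,000 L/day and
# cooling energy to 2,000 kWh/day.
```

The point of the sketch is scale, not the specific baseline: at a 300x ratio, water consumption that once dominated a facility's footprint becomes a rounding error.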

The transition to liquid cooling enables diverse cooling strategies tailored for evolving data center requirements, with options including mechanical chillers, evaporative cooling, dry coolers, and pumped refrigerant systems. Liquid cooling not only reduces reliance on energy- and water-intensive cooling systems but also supports higher operating water temperatures, enhancing flexibility across climates while minimizing ecological impact. Innovations from industry leaders such as Vertiv, Schneider Electric, CoolIT Systems, and Boyd have reduced energy consumption, increased rack density, and improved system reliability for high-performance AI workloads.

Looking toward the future, NVIDIA is spearheading sustainability initiatives like the COOLERCHIPS program, aiming to develop modular data centers with advanced cooling solutions capable of reducing cooling-related costs and environmental footprints even further. As AI continues to surge in computational requirements, the widespread adoption of liquid cooling and high-density architectures is emerging as an essential strategy for building sustainable, efficient, and future-proof AI data centers, ultimately supporting ongoing advancements in artificial intelligence and high-performance computing.


Siemens debuts digital twin composer for industrial metaverse deployments

Siemens has introduced digital twin composer, a software tool that builds industrial metaverse environments at scale by merging comprehensive digital twins with real-time physical data, enabling faster virtual decision making. Early deployments with PepsiCo report higher throughput, shorter design cycles and reduced capital expenditure through physics-accurate simulations and artificial intelligence driven optimization.

Cadence builds chiplet partner ecosystem for physical artificial intelligence and data center designs

Cadence has introduced a Chiplet Spec-to-Packaged Parts ecosystem aimed at simplifying chiplet design for physical artificial intelligence, data center and high performance computing workloads, backed by a roster of intellectual property and foundry partners. The program centers on a physical artificial intelligence chiplet platform and framework that integrates prevalidated components to cut risk and speed commercial deployment.

Patch notes detail split compute and IO tiles in Intel Diamond Rapids Xeon 7

Linux kernel patch notes reveal that Intel’s upcoming Diamond Rapids Xeon 7 server processors separate compute and IO tiles and adopt new performance monitoring and PCIe 6.0 support. The changes point to a more modular architecture and a streamlined product stack focused on 16-channel memory configurations.
