NVIDIA Blackwell Boosts Water Efficiency with Liquid-Cooled AI Infrastructure

NVIDIA's Blackwell platform introduces liquid cooling to dramatically enhance water and energy efficiency in Artificial Intelligence data centers.

As Artificial Intelligence models grow in complexity and computational demands, traditional air-cooled systems in data centers are struggling to keep up with rising power densities and heat dissipation challenges. While legacy facilities operated at around 20 kW per rack, modern hyperscale data centers can now support over 135 kW per rack, highlighting the urgent need for new solutions to manage escalating energy requirements and cooling costs in Artificial Intelligence workloads.
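The jump from roughly 20 kW to over 135 kW per rack can be put in concrete terms. The sketch below uses only the two density figures quoted above; the 100-rack deployment size is a hypothetical assumption for illustration.

```python
# Rack power density figures quoted in the article (kW per rack).
LEGACY_KW_PER_RACK = 20
MODERN_KW_PER_RACK = 135

# Modern hyperscale racks draw ~6.75x the power of legacy racks.
density_increase = MODERN_KW_PER_RACK / LEGACY_KW_PER_RACK
print(f"Density increase: {density_increase:.2f}x")  # 6.75x

# Total IT load for a hypothetical 100-rack deployment.
racks = 100
it_load_mw = MODERN_KW_PER_RACK * racks / 1000
print(f"IT load for {racks} racks: {it_load_mw:.1f} MW")  # 13.5 MW
```

At 13.5 MW of IT load for just 100 racks, the heat rejection burden quickly outgrows what air-based cooling can remove economically.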

Liquid cooling has emerged as a key strategy to address these challenges, offering significant improvements in heat rejection and energy efficiency. By reducing reliance on mechanical chillers, liquid-cooled systems not only lower operational costs but also enable greater scalability and performance for high-density server racks. This shift is particularly vital for data centers powering next-generation Artificial Intelligence models, where energy and heat loads routinely exceed the capabilities of conventional cooling methods.
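One common way to quantify the efficiency gain from reducing chiller reliance is power usage effectiveness (PUE), the ratio of total facility power to IT power. The PUE values below are hypothetical assumptions chosen for illustration, not figures reported by NVIDIA or this article.

```python
# Hedged sketch: how a lower PUE (power usage effectiveness) from
# liquid cooling translates into facility-level energy savings.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power = IT load * PUE (cooling/overhead included)."""
    return it_load_kw * pue

it_load_kw = 135 * 100  # hypothetical: 100 racks at the 135 kW density above

air_cooled_kw = facility_power_kw(it_load_kw, pue=1.5)      # assumed air-cooled PUE
liquid_cooled_kw = facility_power_kw(it_load_kw, pue=1.15)  # assumed liquid-cooled PUE

savings_kw = air_cooled_kw - liquid_cooled_kw
print(f"Estimated overhead saved: {savings_kw:.0f} kW")
```

Under these assumed PUE values, the same IT load costs several megawatts less in cooling and distribution overhead, which is the scalability argument made above.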

NVIDIA's latest rack-scale offerings, the GB200 NVL72 and GB300 NVL72, exemplify this new approach. These systems are designed specifically for the demanding inference tasks required by trillion-parameter large language models, integrating advanced liquid cooling to maintain peak server performance and water efficiency. Their architecture is optimized both for test-time scaling accuracy and for operational sustainability, positioning the Blackwell platform as a frontrunner in future-ready, environmentally conscious Artificial Intelligence infrastructure.


Siemens debuts digital twin composer for industrial metaverse deployments

Siemens has introduced digital twin composer, a software tool that builds industrial metaverse environments at scale by merging comprehensive digital twins with real-time physical data, enabling faster virtual decision making. Early deployments with PepsiCo report higher throughput, shorter design cycles and reduced capital expenditure through physics-accurate simulations and artificial intelligence driven optimization.

Cadence builds chiplet partner ecosystem for physical artificial intelligence and data center designs

Cadence has introduced a Chiplet Spec-to-Packaged Parts ecosystem aimed at simplifying chiplet design for physical artificial intelligence, data center and high performance computing workloads, backed by a roster of intellectual property and foundry partners. The program centers on a physical artificial intelligence chiplet platform and framework that integrates prevalidated components to cut risk and speed commercial deployment.

Patch notes detail split compute and IO tiles in Intel Diamond Rapids Xeon 7

Linux kernel patch notes reveal that Intel’s upcoming Diamond Rapids Xeon 7 server processors separate compute and IO tiles and adopt new performance monitoring and PCIe 6.0 support. The changes point to a more modular architecture and a streamlined product stack focused on 16-channel memory configurations.
