NVIDIA GB300 Blackwell Ultra NVL72 liquid cooling costs nearly $50,000

NVIDIA's GB300 NVL72 Oberon racks use liquid cooling to handle extreme thermal loads from 72 Blackwell Ultra GPUs and 32 Grace CPUs. A Morgan Stanley model estimates cooling hardware costs for the 72-GPU configuration at nearly $50,000.

NVIDIA positions the GB300 NVL72 as its top Blackwell Ultra server configuration, built around Grace Blackwell Ultra superchips and quoted as delivering exascale-class dense FP4 performance and improved throughput-per-megawatt versus prior HGX platforms. The rack’s performance comes with substantial thermal demand, so NVIDIA specifies liquid cooling for the Oberon rack, and industry modeling highlights very large cooling component costs, cited around $50,000 for the full 72-GPU configuration.

Morgan Stanley’s valuation model underpins the cost figures cited. Each Blackwell Ultra GPU is rated at 1,400 W thermal design power, which rules out traditional air cooling for the 72-GPU arrangement: the GPUs alone dissipate roughly 100.8 kW. Adding the 32 Grace CPUs that share the rack pushes peak thermal output well beyond 100 kW and requires dense, liquid-based thermal management across multiple trays within the rack.
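The thermal arithmetic above can be sketched directly from the figures the article cites (1,400 W per GPU, 72 GPUs); the Grace CPU TDP is not given in the text, so only the GPU-side load is computed here:

```python
# GPU-only heat load for the GB300 NVL72 rack, using the article's figures.
GPU_TDP_W = 1400   # thermal design power per Blackwell Ultra GPU
NUM_GPUS = 72      # GPUs per NVL72 rack

gpu_heat_kw = GPU_TDP_W * NUM_GPUS / 1000  # convert watts to kilowatts
print(f"GPU-only heat load: {gpu_heat_kw:.1f} kW")  # 100.8 kW
```

The 32 Grace CPUs add further heat on top of this, which is why the article describes peak rack output as "well beyond 100 kW".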

The bank models the Oberon rack as containing 18 compute trays and nine switch trays. Each compute tray consumes roughly 6.6 kW but requires cooling for about 6.2 kW. Morgan Stanley values the cooling components for a single compute tray at about $2,260, or $40,680 across all 18 compute trays. Switch-tray cooling is estimated at about $1,020 per tray, or $9,180 total, bringing rack-level cooling to roughly $49,860, the figure behind the near-$50,000 headline number. High-performance cold plates are highlighted as the most expensive bill-of-materials items, with per-unit prices in the low hundreds of dollars.
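The rack-level cooling total follows directly from the per-tray valuations; a minimal roll-up using the article's numbers:

```python
# Roll-up of Morgan Stanley's per-tray cooling cost estimates (figures from the text).
COMPUTE_TRAYS = 18
SWITCH_TRAYS = 9
COST_PER_COMPUTE_TRAY_USD = 2260  # cooling components per compute tray
COST_PER_SWITCH_TRAY_USD = 1020   # cooling components per switch tray

compute_total = COMPUTE_TRAYS * COST_PER_COMPUTE_TRAY_USD  # $40,680
switch_total = SWITCH_TRAYS * COST_PER_SWITCH_TRAY_USD     # $9,180
rack_total = compute_total + switch_total                  # $49,860

print(f"Compute-tray cooling: ${compute_total:,}")
print(f"Switch-tray cooling:  ${switch_total:,}")
print(f"Rack cooling total:   ${rack_total:,}")  # ≈ $50,000
```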

Those component-level valuations illustrate how thermal management becomes a major part of system cost and design for exascale-class dense configurations. The combination of very high per-chip TDPs, many GPUs and CPUs in a single rack, and the need for liquid cold plates and tray-level plumbing drives both capital and engineering requirements for deployment in data centers.

Introducing Mistral 3: open AI models

Mistral 3 is a family of open, multimodal and multilingual AI models that includes three Ministral edge models and Mistral Large 3, a sparse mixture-of-experts model with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments, with availability starting Tuesday, Dec. 2.
