Nvidia is expanding its data center ambitions and partner network, unveiling next-generation platforms for large-scale artificial intelligence (AI) infrastructure at the Open Compute Project Global Summit in San Jose. The company’s current NVL72 server links 72 Blackwell GPUs, and its upcoming Rubin architecture will power the NVL144 system, which connects 144 GPUs and is 100 percent liquid-cooled. Rubin can be paired with Rubin CPX, a specialized GPU that uses GDDR7 memory instead of high-bandwidth memory to make large-scale AI inference more cost-effective. Nvidia also set a 2027 target for Kyber, a GPU-based AI supercomputer that will combine 576 GPUs into a single unified machine.
At the heart of Nvidia’s scale-up approach are high-speed interconnects and networking. The company’s proprietary NVLink and Spectrum-X technologies enable ultrafast communication within and across servers, a capability that has helped Nvidia maintain a market share of more than 90 percent against rivals such as AMD. Even as that dominance persists, more cloud and platform companies are designing their own AI chips to reduce their reliance on Nvidia. On Monday, OpenAI said it is working with Broadcom on a 10-gigawatt-scale custom AI chip, with OpenAI handling design and Broadcom leading development and manufacturing. The foundry has not been disclosed, though Reuters has previously reported that OpenAI is in talks with TSMC.
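To make the interconnect point concrete, the short sketch below is an illustrative example rather than anything Nvidia ships: it uses PyTorch’s NCCL backend, which routes traffic over NVLink inside a server when that path is available, to run the all-reduce operation that dominates communication in large-scale training and inference. The script name and tensor size are arbitrary choices for the example.

```python
# Illustrative only: a minimal sketch of the collective communication
# that fast GPU-to-GPU interconnects accelerate inside a multi-GPU server.
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_sketch.py
import os

import torch
import torch.distributed as dist


def main():
    # NCCL is the communication backend that uses NVLink/NVSwitch paths
    # between GPUs when they are present.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each GPU holds its own data; an all-reduce sums it across all GPUs.
    tensor = torch.ones(1024, device="cuda") * dist.get_rank()
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print(f"All-reduce complete across {dist.get_world_size()} GPUs")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The faster each collective like this completes, the less time the GPUs spend waiting on one another, which is why interconnect bandwidth sits at the center of Nvidia’s pitch.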
To keep custom silicon aligned with its ecosystem, Nvidia introduced NVLink Fusion in May, a version of NVLink that selectively opens its interconnect to third parties so their chips can be integrated into Nvidia GPU-based data centers. At the summit, Nvidia added Intel and Samsung Foundry to NVLink Fusion and said it will work with Intel on x86 CPUs that use the technology. Nvidia described Samsung Foundry as a partner positioned to meet rising demand for custom CPUs and XPUs, offering design-to-manufacturing experience for custom silicon. Existing NVLink Fusion partners include CPU makers such as Fujitsu and Qualcomm, as well as custom chip designers Broadcom and Marvell.
The Samsung partnership comes as many AI companies pursue tailor-made chips and TSMC’s foundry capacity remains tight, creating an opening for Samsung to win high-profile clients. Chips built at Samsung Foundry will be able to interface with Nvidia’s platform via NVLink Fusion, keeping them within the Nvidia ecosystem. Nvidia also announced that Meta and Oracle will adopt Spectrum-X for their AI data centers. CEO Jensen Huang said models have reached the scale of trillions of parameters, effectively turning data centers into gigascale AI factories, and characterized Spectrum-X as the nervous system connecting millions of GPUs inside those facilities.
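As a rough back-of-the-envelope illustration (ours, not Nvidia’s, with assumed precision and per-GPU memory figures), the sketch below shows why a trillion-parameter model cannot fit on a single GPU and pushes operators toward rack-scale and data-center-scale systems.

```python
# Back-of-the-envelope sketch: why trillion-parameter models force scale-out.
# The parameter count comes from the article; bytes per parameter and per-GPU
# memory are assumptions chosen purely for illustration.
params = 1e12            # one trillion parameters
bytes_per_param = 2      # assume 16-bit (FP16/BF16) weights
gpu_memory_gb = 192      # assume roughly 192 GB of HBM per high-end GPU

weights_tb = params * bytes_per_param / 1e12
gpus_for_weights = params * bytes_per_param / (gpu_memory_gb * 1e9)

print(f"Weights alone: ~{weights_tb:.0f} TB")
print(f"Minimum GPUs just to hold the weights: ~{gpus_for_weights:.0f}")
# Optimizer state, activations, and KV caches push the real number far higher,
# which is why systems link 72, 144, or 576 GPUs into one coherent machine.
```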