Nvidia’s networking stack has moved to the center of its platform strategy, with NVLink emerging as the defining advantage for scaling GPU clusters. The company’s latest step, NVLink Fusion, lands just as UALink, an open spec backed by AMD, Intel, Broadcom, Google, and others, tries to counter that edge. But UALink is moving slowly: version 1.0 targets have only just been set and hardware is not expected until next year, while Nvidia is already shipping its next generation.
At its keynote, Nvidia announced two moves with far-reaching implications: licensing its C2C (chip-to-chip) interconnect technology and selling pre-verified NVLink chiplet I/O dies. C2C licensing opens Nvidia’s short-reach die-to-die PHY and protocol to third parties, with CPUs called out as an early focus, including work with Fujitsu. That approach fits high-performance computing needs that favor tight CPU-GPU coupling: a one-to-one pairing reminiscent of Grace Hopper rather than Grace Blackwell, letting custom ARM CPUs sit beside tightly linked GPUs for CPU-heavy scientific and engineering workloads.
The calculus behind these choices is pragmatic. Building a custom CPU is relatively accessible via ARM CSS, and front-end accelerator design is within reach of modest teams using standard EDA tools. The real stumbling blocks are networking I/O and the scale-up domain that connects accelerators coherently. Those are extremely hard problems, and today NVLink is the only option proven at scale. By licensing C2C, Nvidia accelerates heterogeneous adoption where differentiation is lower. By selling NVLink chiplets rather than licensing the technology itself, it keeps the crown jewels in-house while making it easier for others to build around Nvidia’s fabric.
Strategically, this sets up an embrace, extend, extinguish dynamic against UALink. The open consortium faces a tragedy of the commons as members push competing priorities, and its 128G launch timing leaves it trailing the current pace of accelerator deployment. Nvidia can seed the market with NVLink chiplets as a Trojan horse: once rivals adopt them to ship working systems sooner, Nvidia can move its roadmap faster than an open spec can converge, deepening customer reliance and gaining visibility into competitive designs. In a market hungry for alternatives, many will still conclude it is Nvidia’s solution or none, reinforcing lock-in across AI and HPC infrastructure.