Arm joins Nvidia NVLink Fusion ecosystem

Arm has joined Nvidia's NVLink Fusion ecosystem, enabling Arm licensees to design server CPUs that connect natively to Nvidia GPUs and artificial intelligence accelerators. The collaboration could strengthen Arm's appeal to hyperscalers and expand Nvidia's ecosystem while putting pressure on rivals such as AMD and Intel.

Arm and Nvidia announced at the Supercomputing ’25 conference that Arm has joined the NVLink Fusion ecosystem. The move brings multiple microarchitecture and CPU developers under NVLink Fusion, and it creates a pathway for Arm licensees to integrate NVLink IP into custom CPU SoCs so those processors can connect directly to Nvidia GPUs and artificial intelligence accelerators. Dion Harris, head of data center product marketing at Nvidia, said, “Arm is integrating NVLink IP so that their customers can build their CPU SoCs to connect Nvidia GPUs,” and noted that NVLink Fusion can reduce design complexity and development costs for hyperscalers.

For Arm, the change has different benefits across its businesses. As an intellectual property provider, Arm can now offer licensees a ready-made route to build CPUs that plug natively into Nvidia’s artificial intelligence accelerator ecosystem, rather than relying on PCIe alone. That can make Arm-based designs more attractive to hyperscalers and sovereign cloud builders who want custom CPUs compatible with market-leading Nvidia GPUs. Previously, Nvidia’s Grace CPUs were the primary processors with NVLink connectivity to Nvidia GPUs. Arm also gains the potential to develop its own server CPUs that can compete inside Nvidia-based systems alongside Nvidia’s Grace and Vera CPUs and Intel’s Xeon processors, though Nvidia’s policies and product timelines will affect how that competition unfolds.

The addition of Arm expands the pool of CPUs that can serve natively in Nvidia-centric artificial intelligence systems without Nvidia having to build all of those CPUs itself. Arm licensees such as Google, Meta, and Microsoft could integrate NVLink directly into their SoCs, increasing options for specialized semi-custom infrastructure and strengthening Nvidia’s position in sovereign artificial intelligence projects. The report concludes that Arm’s inclusion is broadly beneficial for Arm, Nvidia, and their partners, while posing competitive risks for AMD, Intel, and Broadcom, even as chip development cycles may narrow those advantages over time.

Impact Score: 58

