Nvidia opens NVLink to Arm Neoverse CPUs for Artificial Intelligence servers

Nvidia is extending NVLink Fusion to Arm-based Neoverse CPUs so licensees can integrate NVLink IP into custom CPU SoCs. The change targets Artificial Intelligence-focused servers and lets hyperscalers pair custom Arm CPUs with Nvidia GPUs without requiring Nvidia CPUs.

Nvidia has announced that Arm-based Neoverse CPUs can integrate with its NVLink Fusion technology, enabling direct and efficient communication between Arm licensee CPUs and Nvidia GPUs. The move opens NVLink beyond Nvidia’s own proprietary CPUs and the Intel and AMD server configurations that previously dominated NVLink deployments. At the Supercomputing ’25 conference, Arm confirmed that custom Neoverse designs will include a protocol enabling seamless data transfer with Nvidia accelerators, helping to eliminate PCIe bottlenecks in Artificial Intelligence-focused server deployments.

Arm licensees can now build CPU system-on-chip (SoC) designs that include NVLink IP natively, allowing customers to pair multiple GPUs with a single Arm CPU for Artificial Intelligence workloads. Hyperscalers such as Microsoft, Amazon, and Google can immediately adopt custom Arm CPUs paired with Nvidia GPUs in workstations and Artificial Intelligence servers, giving them more control over infrastructure and potential operational cost savings. Nvidia’s existing Grace Blackwell platform, which already pairs multiple GPUs with an Arm-based CPU, is cited as one example of Arm and Nvidia integration, but NVLink Fusion support means Nvidia GPUs can be used without Nvidia CPUs in other server configurations.

The development has broader ecosystem and competitive implications. The NVLink Fusion expansion increases the range of CPUs that can be used in Nvidia-centric Artificial Intelligence systems and enables future Arm-based designs to compete directly with Nvidia’s Grace and Vera processors, as well as Intel Xeon CPUs, in GPU-led configurations. The announcement may reduce the appeal of alternative interconnects or competing accelerators, though chip development cycles could affect actual adoption timing. The change also matters for sovereign Artificial Intelligence projects, where governments or cloud providers may prefer Arm CPUs for control-plane tasks while still leveraging Nvidia GPUs for accelerator functions. SoftBank’s backing of OpenAI’s Stargate project, which plans to use both Arm and Nvidia chips, is mentioned as an example of this multi-vendor approach.

Impact Score: 65

China and the US are leading different Artificial Intelligence races

The US leads in large language models and advanced chips, while China has built a major advantage in robotics and humanoid manufacturing. That balance is shifting as Chinese developers narrow the gap in model performance and both countries push to combine software and machines.

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.
