Nvidia’s Vera Arm server chip targets data centers, not gaming PCs

Nvidia is pushing beyond its graphics dominance with Vera, a custom Arm-based server CPU debuting in a major cloud deal, but the chip is not headed for gaming PCs yet.

Nvidia is extending its dominance beyond graphics processors into central processors with Vera, a new Arm-based CPU built on custom-designed cores. The company has announced its first customer for Vera and stressed that this is the first time it has supplied a CPU as a standalone product, marking a strategic move into a market long controlled by established server players. The launch underlines Nvidia’s ambition to become a full-stack computing provider rather than simply a graphics specialist.

The first big deployment of Vera comes through a deal with cloud services company CoreWeave, which Bloomberg describes as the effective commercial launch of the chip. In what the article calls a typically circular arrangement within the artificial intelligence ecosystem, Nvidia is investing $2 billion in CoreWeave, while the cloud company will purchase up to $6 billion in Nvidia hardware, including Vera CPUs. Bloomberg says the Vera CPU will compete with server chips from Intel and AMD, as well as other cloud computing processors such as Amazon’s Graviton, positioning it squarely in the data center rather than the consumer PC arena.

Technical details on Vera are still sparse, but Nvidia has said the chip uses a new custom Arm core design called Olympus. The known specifications are that Vera has 88 cores, is rated at 50 W, and offers double the performance of Nvidia’s previous Arm CPU, known as Grace. Grace has 72 cores, so Vera delivers twice the performance from a relatively modest increase in core count, and 50 W is a remarkably low power rating for an 88-core CPU. While Grace relies on Neoverse V2 cores licensed directly from Arm, Vera’s Olympus cores are Nvidia’s own design, and Nvidia’s roadmap shows Vera and its Olympus cores as the basis of its CPU products for several years to come.
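To put those figures in perspective, here is a rough back-of-envelope sketch (not from Nvidia’s materials) that assumes the claimed doubling refers to aggregate throughput and that throughput scales linearly with core count; under those assumptions, most of the gain would have to come from the Olympus cores themselves rather than the extra cores.

# Back-of-envelope estimate of the Grace-to-Vera per-core uplift,
# assuming the "double the performance" claim refers to aggregate
# throughput and that throughput scales linearly with core count.
grace_cores = 72
vera_cores = 88
claimed_total_speedup = 2.0  # Vera vs. Grace, per Nvidia's stated figures

core_count_ratio = vera_cores / grace_cores          # ~1.22x more cores
implied_per_core_uplift = claimed_total_speedup / core_count_ratio

print(f"Core count ratio:        {core_count_ratio:.2f}x")
print(f"Implied per-core uplift: {implied_per_core_uplift:.2f}x")
# -> roughly 1.64x per core, which is why the move from Neoverse V2
#    to the custom Olympus design matters more than the extra 16 cores.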

For PC enthusiasts, the key question is whether Vera and its custom cores will show up in consumer systems. The article says that in the short term the answer seems to be a firm “no”: Nvidia’s planned Arm CPU for PCs, codenamed N1X, will be based on the GB10 “Superchip” in the DGX Spark mini artificial intelligence computer and will not use Vera or Olympus. Instead, N1X will use off-the-shelf Arm-designed Cortex-X925 and Cortex-A725 cores. The piece notes that a rumoured follow-up called Nvidia N2 might be a candidate for custom Arm cores, but frames that as speculation. More broadly, it argues that even if Nvidia delivers extremely powerful Arm hardware for PCs, software support and, in particular, game compatibility will be a major hurdle. The story concludes that there is still substantial work ahead for Arm on the PC, even if a company with Nvidia’s resources is willing to take on the challenge.
