Nvidia launches Vera Arm CPU as standalone rival to Xeon and Epyc

Nvidia is taking its Vera Arm server CPU standalone, positioning it directly against Intel Xeon and AMD Epyc while coupling it tightly to its Rubin GPUs through high bandwidth interconnects.

Nvidia is extending its Artificial Intelligence systems strategy beyond GPUs by introducing its high performance Vera Arm CPU as a standalone product, marking its first direct entry into competition with Intel's Xeon and AMD's Epyc server processors. In an interview with Bloomberg, Nvidia chief executive Jensen Huang said the company will, for the first time, offer Vera CPUs as an independent part of the infrastructure, enabling customers to run their computing stack on both Nvidia GPUs and Nvidia CPUs. Huang described Vera as completely revolutionary and suggested that cloud partners such as CoreWeave will have to move quickly if they want to be first to deploy the new chip, adding that Nvidia has not yet announced any CPU design wins but expects there will be many.

The Vera CPU is equipped with 88 custom Armv9.2 Olympus cores that use Spatial Multithreading, which partitions physical core resources to handle 176 threads. The cores support native FP8 processing through a 6×128-bit SVE2 implementation, enabling some Artificial Intelligence workloads to be executed directly on the CPU. The chip offers 1.2 TB/s of memory bandwidth and supports up to 1.5 TB of LPDDR5X memory, positioning it for memory intensive workloads that traditionally lean on high bandwidth, large capacity memory subsystems. With Vera now offered as a standalone product, one open question is whether more traditional memory options such as DDR5 RDIMMs will be supported, or whether the platform will rely solely on SOCAMM LPDDR5X modules.
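
To put those figures in concrete terms, the sketch below shows how a Linux program running on an Arm server can check for SVE2 support and count the hardware threads visible to the operating system. It is a generic aarch64 example rather than anything Vera specific; getauxval, AT_HWCAP2, HWCAP2_SVE2 and sysconf are standard glibc and Linux kernel interfaces, and the thread count is simply whatever the kernel reports (per the published spec, 176 on a single Vera socket).

    /* probe_sve2.c: report online hardware threads and SVE2 support on aarch64 Linux */
    #include <stdio.h>
    #include <sys/auxv.h>   /* getauxval, AT_HWCAP2 */
    #include <unistd.h>     /* sysconf */

    #ifndef HWCAP2_SVE2
    #define HWCAP2_SVE2 (1UL << 1)  /* value from <asm/hwcap.h> on aarch64 */
    #endif

    int main(void) {
        long threads = sysconf(_SC_NPROCESSORS_ONLN);   /* hardware threads the kernel has online */
        unsigned long hwcap2 = getauxval(AT_HWCAP2);    /* kernel-reported CPU feature bits */

        printf("online hardware threads: %ld\n", threads);
        printf("SVE2 supported: %s\n", (hwcap2 & HWCAP2_SVE2) ? "yes" : "no");
        return 0;
    }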

To keep all cores fed and coherent, Nvidia has implemented a second generation Scalable Coherency Fabric that provides 3.4 TB/s of bisection bandwidth, connecting the cores across a unified monolithic die and avoiding the latency penalties common in chiplet architectures. Nvidia has also integrated second generation NVLink Chip-to-Chip technology, delivering up to 1.8 TB/s of coherent bandwidth to external Rubin GPUs and reinforcing the company's strategy of pairing its CPUs closely with its own accelerators. Together, the core design, memory architecture, and high speed interconnects suggest that Vera is engineered as a tightly coupled compute platform for data center and Artificial Intelligence workloads rather than a generic server CPU.
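
As a rough illustration of what those interconnect numbers imply, the back-of-the-envelope sketch below estimates how long it would take to move the full 1.5 TB of CPU memory at the quoted bandwidths. Only the capacity, memory bandwidth, and NVLink C2C figures come from the announcement; the PCIe 5.0 x16 rate of roughly 63 GB/s per direction is an approximate reference point added here for comparison.

    /* bandwidth_estimate.c: rough transfer-time estimates from the published figures */
    #include <stdio.h>

    int main(void) {
        const double capacity_tb    = 1.5;    /* max LPDDR5X capacity (TB)            */
        const double mem_bw_tbs     = 1.2;    /* CPU memory bandwidth (TB/s)          */
        const double nvlink_c2c_tbs = 1.8;    /* NVLink C2C coherent bandwidth (TB/s) */
        const double pcie5_x16_tbs  = 0.063;  /* ~PCIe 5.0 x16, one direction (TB/s), approximate */

        printf("one full sweep of CPU memory:       %.2f s\n", capacity_tb / mem_bw_tbs);
        printf("full-memory copy over NVLink C2C:   %.2f s\n", capacity_tb / nvlink_c2c_tbs);
        printf("same copy over PCIe 5.0 x16 (est.): %.2f s\n", capacity_tb / pcie5_x16_tbs);
        return 0;
    }

On these numbers, a Rubin GPU attached over NVLink C2C could in principle read the entire CPU memory pool in under a second, which is the kind of tight coupling the platform is built around.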
