What Meta’s purchase of Rivos says about RISC-V

Meta has acquired Rivos, a startup that taped out a CUDA-compatible RISC-V processor, signaling that Nvidia’s grip on Artificial Intelligence infrastructure may face new pressure from open architectures.

Meta’s acquisition of Rivos highlights a strategic shift in the race to supply compute for Artificial Intelligence at hyperscale. Founded in 2021 with backing from Walden Catalyst Ventures, Santa Clara-based Rivos set out to build a system-on-chip that could drop into cloud data centers without forcing software rewrites. The startup taped out a CUDA-compatible RISC-V design and emphasized a “recompile not redesign” approach to ease adoption for developers already invested in Nvidia’s software stack. Walden Catalyst said Meta’s offer was chosen from multiple acquisition approaches, describing the deal as validation of both the technology and the company’s vision.
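
To make the “recompile not redesign” pitch concrete, the sketch below is ordinary CUDA C++ of the kind Nvidia-centric teams already maintain. The claim is that source like this would be rebuilt for the Rivos SoC with a compatible toolchain rather than rewritten for a new programming model; the Rivos-side build command is not public here, so only the standard Nvidia build is referenced.

```cuda
// Plain CUDA C++: a SAXPY kernel and its launch. In the "recompile not
// redesign" framing, this unmodified source is the asset being preserved.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();
    printf("launch: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Today this builds with `nvcc saxpy.cu -o saxpy`; the pitch, as described above, is that the same file rather than a port of it would be the input to Rivos’ own toolchain.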

Rivos’ first design used a chiplet architecture that paired a 64-bit RVA23 RISC-V CPU running at 3.1 GHz with a Rivos-designed SIMT GPGPU, all tied together by a unified memory subsystem combining on-package HBM3e with DDR5 RDIMMs. The SoC included an integrated Ultra Ethernet NIC for high-speed connectivity. In a white paper, the company argued that this tightly coupled architecture improves scalability and energy efficiency for training, inference, and reasoning by minimizing external data movement and avoiding the common practice of adding GPUs merely to compensate for memory limits. Rivos also worked on a PCIe-based accelerator.
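
The white paper’s data-movement argument is easiest to see by contrast with today’s discrete-GPU workflow. The sketch below is an analogy only: it uses Nvidia’s CUDA managed-memory API, not anything Rivos has published, to show what a single allocation shared by CPU and accelerator looks like compared with explicit staging copies between host DRAM and device HBM.

```cuda
// Analogy using CUDA managed memory: one allocation is visible to both the
// CPU and the GPU, so there is no explicit cudaMemcpy staging step. A tightly
// coupled unified memory subsystem aims for a similar effect in hardware.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));    // single shared allocation

    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes it directly

    scale<<<(n + 255) / 256, 256>>>(data, n, 3.0f); // GPU works on the same pointer
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);            // CPU reads the result, no copy-back
    cudaFree(data);
    return 0;
}
```

In a discrete setup the same workload would bracket the kernel with copies to and from device memory; the Rivos argument is that when HBM3e and DDR5 sit behind one memory subsystem, capacity can grow without adding GPUs simply to obtain more HBM.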

The deal gives Meta an immediate boost for its in-house silicon roadmap. The company had already been working with Rivos on its Meta Training and Inference Accelerator, including the MTIA 1i and MTIA 2i chips aimed at speeding Artificial Intelligence inference. Even so, Meta remains one of Nvidia’s largest customers. Mark Zuckerberg has said Meta’s fleet of Nvidia GPUs numbered about 750,000 in 2024 and would reach around 1.3 million by the end of 2025, powering initiatives such as the Hyperion Artificial Intelligence supercluster in Louisiana and the Prometheus multi-gigawatt data center expected in 2026.

Nvidia continues to dominate Artificial Intelligence training and inference with its Hopper and Blackwell generations, including the H200, but supply constraints and evolving workload needs are opening the door to alternatives. Nvidia’s latest strategy couples CPUs tightly with its GPUs in superchips, using Arm-based Grace CPUs in Grace-Hopper and Grace-Blackwell and striking a deal with Intel that positions x86 CPUs alongside Nvidia GPUs over NVLink. Nvidia cited the maturity of the x86 ecosystem as a key reason for that choice. Meta’s Rivos move, however, underscores that the CPU market is not confined to x86 and Arm, and that RISC-V could play a larger role for hyperscalers seeking adaptable, cost-effective architectures.

RISC-V originated 15 years ago at UC Berkeley’s Par Lab as an open instruction set architecture, and today the project counts more than 4,500 members in 70 countries, according to RISC-V International. While RISC-V has gained traction in China, adoption in the United States has lagged, something that could shift as companies prioritize open standards for massive inference buildouts. Notably, Nvidia announced in July that CUDA would support RISC-V host CPUs, a move that both validates RISC-V momentum for Artificial Intelligence workloads and narrows the software advantage Rivos sought to exploit. For startups and hyperscalers alike, Meta’s acquisition is likely to be read as a green light for betting on RISC-V in next-generation data center silicon.
