Nvidia expands into data center and consumer CPU markets

Nvidia is deepening its presence in data centers and edging into consumer PCs by pushing its Grace CPU-only servers and reportedly preparing laptop chips, directly challenging Intel and AMD on their core territory.

Nvidia has signed an expanded, multiyear data center agreement with Meta that will provide the social media company with millions of Blackwell and Rubin GPUs, while also marking the first large-scale deployment of Nvidia Grace CPU-only servers in Meta’s data centers. Grace is the processor that Nvidia pairs with two Blackwell or two Blackwell Ultra GPUs to form its GB200 and GB300 Artificial Intelligence superchips, but Meta’s deployment highlights a new standalone role for the CPU. The Grace-only servers arrive as hyperscalers increasingly use traditional CPUs to support some Artificial Intelligence inferencing and agentic Artificial Intelligence applications, giving Nvidia a broader footprint in data center infrastructure.

The strategy challenges Intel, which has long dominated data center CPUs, and Advanced Micro Devices, which has been trying to take share from Intel. Analysts note that Nvidia has steadily expanded its data center presence, adding networking through the Mellanox acquisition in 2020, so that it now supplies a large share of the value going into modern server builds, and that adding CPU capacity further increases its share of spending. Nvidia’s CPU push is not a retreat from its core GPU franchise or a signal that the Artificial Intelligence GPU market is weakening, but a move to capture demand from smaller Artificial Intelligence models that can be efficiently powered by CPUs. CPUs also represent a key bottleneck in the Artificial Intelligence supply chain, and by offering its own, Nvidia aims to keep its overall system sales flowing rather than being constrained by third-party processor shortages.

Competition around CPUs is intensifying as major cloud providers such as Amazon, Google, and Microsoft build their own Arm-based processors, including Amazon’s Graviton, Google’s Axion, and Microsoft’s Cobalt, in contrast to the x86 architecture used by Intel and AMD. Intel recently reported that it is unable to meet CPU demand, with one analyst noting that Intel lacked the capacity despite owning chip plants and had been selling equipment for pennies on the dollar two quarters ago, leaving it flat-footed as demand surged. At the same time, Nvidia is collaborating with Intel on special servers that combine Nvidia’s GPUs with Intel’s CPUs, highlighting a mix of competition and partnership. Beyond data centers, Nvidia is also reportedly preparing CPUs for consumer laptops, with online leaks indicating Lenovo is developing six laptops using Nvidia’s N1 and N1X processors. Entering laptops would open a new, although less lucrative, market compared with Nvidia’s data center business, which brought in $51.2 billion in its third quarter alone, and could appeal to gamers already familiar with Nvidia’s graphics cards. Nvidia’s widening CPU effort positions the company as a more direct rival to Intel and AMD across both server and PC markets.

Impact Score: 68

Adobe plans outcome-based pricing for Artificial Intelligence agents

Adobe is positioning its Artificial Intelligence agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative Artificial Intelligence tools to business customers.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.
