Nvidia says it’s ‘a generation ahead’ amid early signs of Artificial Intelligence chip rivalry with Google

Nvidia responded to market concerns after its shares fell 3 per cent following a report that Meta could use Google’s tensor processing units for new data centres. The company said its technology remains 'a generation ahead' and stressed the breadth of its Artificial Intelligence platform.

Nvidia on Tuesday said its chip technology is “a generation ahead” of the industry, responding to concerns that Google’s chip push could challenge its dominance in Artificial Intelligence infrastructure. The statement, posted on X, followed a 3 per cent fall in Nvidia shares after a report that Meta could strike a deal with Google to use its tensor processing units in Meta’s new data centres. Nvidia argued that its graphics processing units, such as the Hopper and Blackwell generations, are more flexible than Google’s tensor processing units, reiterated that “NVIDIA is a generation ahead of the industry” and noted that it supplies Google as well as other customers.

The article notes that Nvidia’s GPUs currently account for more than 90 per cent of the market for Artificial Intelligence chips, according to analysts quoted in the piece. Google’s tensor processing units are application-specific integrated circuits, or ASICs, a different chip category designed for specialised functions. Google has developed TPUs over the last decade and has offered them through Google Cloud for at least five years, while keeping most of their use in-house. The release of Gemini 3, trained on Google’s own TPUs, and the Meta report have heightened market scrutiny of how much in-house TPU development could alter compute choices for hyperscalers and large Artificial Intelligence training workloads.

Beyond chip design, the article highlights manufacturing and ecosystem hurdles for any challenger. A security specialist quoted in the piece warned that cost, performance and access to foundry capacity at TSMC could limit new entrants, and that Nvidia’s software ecosystem, including CUDA and its broader platform offerings, creates stickiness for users. Advanced systems now deploy as many as half a dozen GPUs for every CPU in model training, making switching complex. Market observers also noted Nvidia’s high margins, at about 70 per cent, and questioned how long those could persist as competition and cost pressures evolve. The piece leaves open whether TPUs offer operational or energy advantages sufficient to displace Nvidia’s entrenched position.
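
To illustrate the kind of vendor-specific code behind that stickiness, here is a minimal, hypothetical CUDA sketch (not drawn from the article): a vector-add kernel written against NVIDIA’s CUDA runtime API. Workloads built from code like this, multiplied across large training and inference stacks, are what make moving to a different accelerator non-trivial, since both the kernels and the memory-management calls would need rewriting.

```cuda
// Hypothetical, minimal example of NVIDIA-specific code: a CUDA vector-add
// kernel plus the runtime-API calls (cudaMalloc, cudaMemcpy, kernel launch)
// that would have to be rewritten to target a non-NVIDIA accelerator.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers and host-to-device transfers use CUDA-specific APIs
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch configuration uses CUDA's grid/block syntax
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);  // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

This sketch only stands in for the far larger bodies of CUDA kernels, libraries and tooling that production Artificial Intelligence stacks accumulate; it is that accumulated investment, rather than any single kernel, that the quoted specialist points to as a barrier to switching.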
