India makes its bets as Artificial Intelligence chip race hots up; startup costs may fall

India is expanding computing capacity under its IndiaAI Mission, while rising competition between Google and Nvidia in Artificial Intelligence hardware could reduce chip costs for domestic startups.

Global competition between Google and Nvidia has shifted conversations about Artificial Intelligence hardware but has so far had limited direct impact on India’s roadmap. Late-November reports that Meta Platforms may use Google’s in-house Tensor Processing Units knocked Nvidia’s stock down nearly 3 per cent, and Google’s recent Gemini 3, described as a GPT-5-beating model, was built entirely on TPUs. Industry analysts quoted in the article note that while Google’s TPUs now support large-scale training as well as inferencing, Nvidia has responded by stressing that its chips remain “a generation ahead of the industry.”

For India the immediate priority is building foundational models that reflect the country’s linguistic and cultural complexity, and that effort continues to rely on chips from Nvidia, AMD and Intel. Under the government’s IndiaAI Mission, launched in March 2024 with a budget of ₹10,372 crore, the country has expanded its shared infrastructure to 34,333 GPUs, almost twice the count from the initial August 2024 tender. That common cloud base offers training and inference capacity to startups and enterprises. Yotta has deployed much of a first tranche of 8,000 GPUs to AI startups building sovereign LLMs and has ordered a second tranche of 8,000 GPUs that should be in use by December or early next year; the article reports the company is investing a further, unspecified sum to buy 8,000 more GPUs as part of the mission.

Analysts quoted say the competition will be positive for India because a more crowded market should lower prices and expand choice. Nvidia has held more than 80 per cent of the market, and its Blackwell chips are described as four times more powerful than the H100, but TPUs and other accelerators excel at inferencing and may offer large-scale cost and power advantages. One analyst warns that once boards focus on cost, a TPU or cloud-specific accelerator that can deliver the same outcome at 30 to 50 per cent lower cost will gain traction for inference workloads, even if training continues to lean on Nvidia; a rough version of that comparison is sketched below. The article highlights that lower chip prices and more diverse hardware options could ease compute costs for domestic Artificial Intelligence startups and buyers participating in the IndiaAI Mission.
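To see how that 30 to 50 per cent figure could play out, the short Python sketch below compares serving cost per million output tokens for two accelerators. The hourly rates and throughput numbers are hypothetical placeholders chosen for illustration; they are not quoted prices or benchmarks from the article.

# Back-of-the-envelope inference cost comparison, illustrating the
# 30-50 per cent saving analysts describe. All figures below are
# hypothetical placeholders, not quoted prices or benchmarks.

def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Serving cost per million output tokens for one accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical: a GPU instance at $4.00/hr vs an alternative accelerator
# at $2.50/hr with broadly comparable throughput on the same model.
gpu = cost_per_million_tokens(hourly_rate_usd=4.00, tokens_per_second=1200)
alt = cost_per_million_tokens(hourly_rate_usd=2.50, tokens_per_second=1100)

saving = 1 - alt / gpu
print(f"GPU: ${gpu:.2f}/M tokens, alternative: ${alt:.2f}/M tokens")
print(f"Saving: {saving:.0%}")  # lands in the 30-50 per cent band cited

With these assumed numbers the alternative accelerator serves the same workload at roughly a third lower cost, squarely in the band the analyst describes, and it is this kind of arithmetic, rather than peak training performance, that tends to drive inference procurement decisions.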


Samsung to supply half of NVIDIA’s SOCAMM2 modules in 2026

Hankyung reports Samsung Electronics has secured a deal to supply half of NVIDIA’s SOCAMM2 modules in 2026 for the Vera Rubin Superchip, which pairs two ‘Rubin’ Artificial Intelligence GPUs with one ‘Vera’ CPU and moves from hardwired memory to DDR5 SOCAMM2 modules.

NVIDIA announces CUDA Tile in CUDA 13.1

CUDA 13.1 introduces CUDA Tile, a virtual instruction set for tile-based parallel programming that raises the programming abstraction above SIMT and abstracts tensor cores to support current and future tensor core architectures. The change targets workloads, such as Artificial Intelligence, in which tensors are a fundamental data type.
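The announcement above does not show CUDA Tile’s actual API, so the Python/NumPy sketch below only illustrates the underlying idea of tile-based decomposition: the unit of parallel work becomes a tile of the output rather than an individual scalar element, as in SIMT. It is an analogy, not CUDA Tile code.

# Illustrative only: this is NOT the CUDA Tile API. A NumPy sketch of
# tile-based decomposition, where the unit of work is a tile of the
# output matrix rather than a single element (the SIMT view).
import numpy as np

TILE = 4  # hypothetical tile edge; real tile shapes are hardware-defined

def tiled_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and m % TILE == 0 and n % TILE == 0 and k % TILE == 0
    c = np.zeros((m, n), dtype=a.dtype)
    # Each (i, j) iteration produces one independent tile of C; on
    # tile-oriented hardware this is the granularity a tensor core
    # consumes in a single operation.
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            acc = np.zeros((TILE, TILE), dtype=a.dtype)
            for p in range(0, k, TILE):
                acc += a[i:i+TILE, p:p+TILE] @ b[p:p+TILE, j:j+TILE]
            c[i:i+TILE, j:j+TILE] = acc
    return c

a = np.arange(64, dtype=np.float32).reshape(8, 8)
b = np.eye(8, dtype=np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b)

Because each output tile maps naturally onto a tensor-core operation, expressing programs at tile granularity is what lets a single abstraction target current and future tensor core architectures.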
