Nvidia denies report on Groq chip plans for China

Nvidia says a report that it is preparing Groq inferencing chips for shipment to China is "totally false," even as interest in H200 sales to the country remains strong. The dispute highlights how closely watched Nvidia’s China strategy has become across training and inferencing hardware.

Nvidia has denied a report that it is preparing a custom version of Groq inferencing hardware for China. In an update dated 3/19/2026 4:50pm PT, Nvidia CEO Jensen Huang said the Reuters story about Groq chips being prepared for shipment to China was “totally false.” The denial came after broader reporting that Beijing had approved Nvidia’s last-generation H200 GPUs for sale in the region following months of talks involving the U.S. government, Nvidia, and China.

Nvidia is nevertheless moving to revive H200 sales into China. Huang said earlier this week that Nvidia had received licenses to supply “many customers in China” and had taken orders from a number of companies. To meet that demand, Nvidia was restarting the H200 production line, with Huang saying that the “supply chain is getting fired up.” The H200 may be older hardware, but demand remains high because it is a much stronger training option than the H20 products previously sold into China. Nvidia reportedly did not include potential revenue from H200 sales to China in its projected $1 trillion revenue plan for 2027.

Any rebound in China comes with significant constraints. Although the Trump administration has approved some H200 sales to China, the approval carries a 25% revenue share with the U.S. government; Nvidia must pay the fee when the chips arrive in the U.S. from their fabrication facilities for approval before being re-exported. China has also been cautious about letting Nvidia hardware dominate its domestic market, even as local companies continue to seek high-performance systems for large language model training and deployment.

The contested Groq angle centered on Nvidia’s effort to strengthen its inferencing position. Groq, a provider of custom inferencing hardware known as Language Processing Units, became tied to Nvidia through a late-2025 licensing and hiring deal. Nvidia had made a $14 billion deal with Groq, and the technology was featured as part of Nvidia’s Vera Rubin platform at GTC 2026. Reuters had reported that Nvidia was adapting Groq LPUs for China and that the products were targeting a May release, but Huang’s denial directly challenges that account.

Inferencing remains a more competitive field than training, particularly in China. Chinese companies including Baidu and Huawei are developing their own inferencing chips and have received substantial backing to accelerate that work. Global rivals are also active, with Meta developing MTIA inferencing chips, Google advancing its TPU line, and Amazon building its own custom accelerators. Even without the reported Groq push, Nvidia appears determined to rebuild from the “0%” China market share figure Huang cited last fall, using renewed H200 availability as its immediate path back into the market.

Impact Score: 58

Study finds widespread weaknesses in autonomous agents

A multi-institution study found that autonomous agents across several sectors are highly exposed to tool-chaining, goal drift, and memory poisoning attacks. The findings suggest agentic systems face broader and deeper security risks than stateless large language models.

Federal safety net unprepared for Artificial Intelligence job losses

Economists are warning that the federal system designed to support displaced workers is not equipped for a wave of job losses tied to Artificial Intelligence. Existing unemployment benefits and retraining programs are widely seen as too limited to manage broad disruption.

Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.
