Nvidia denies report on Groq chip plans for China

Nvidia says a report that it is preparing Groq inferencing chips for shipment to China is "totally false," even as interest in H200 sales to the country remains strong. The dispute highlights how closely watched Nvidia’s China strategy has become across training and inferencing hardware.

Nvidia has denied a report that it is preparing a custom version of Groq inferencing hardware for China. In an update dated 3/19/2026 4:50pm PT, Nvidia CEO Jensen Huang said the Reuters story about Groq chips being prepared for shipment to China was “totally false.” The claim came after broader reporting that Beijing had approved Nvidia’s last-generation H200 GPUs for sale in the region following months of talks involving the U.S. government, Nvidia, and China.

Nvidia is nevertheless moving to revive H200 sales into China. Huang said earlier this week that Nvidia had received licenses to supply “many customers in China” and had received orders from a number of companies. To meet that demand, Nvidia was restarting the H200 production line, with Huang saying that the “supply chain is getting fired up.” The H200 may be older hardware, but demand remains high because it is positioned as a much stronger training option than the H20 products previously sold into China. Nvidia reportedly did not include potential revenue from H200 sales to China in its projected $1 trillion revenue plan for 2027.

Any rebound in China comes with significant constraints. Although the Trump administration has approved some sales of H200 chips to China, those approvals carry a 25% revenue share paid to the U.S. government. Nvidia will owe the fee when the chips arrive in the U.S. from their fabrication facilities for approval, before being re-exported to China. China has also been cautious about allowing Nvidia hardware to dominate its domestic market, even as local companies continue to seek high-performance systems for large language model training and deployment.

The contested Groq angle centered on Nvidia’s effort to strengthen its inferencing position. Groq, a provider of custom inferencing hardware known as Language Processing Units, became tied to Nvidia through a late-2025 licensing and hiring deal reportedly worth $14 billion, and the technology was featured as part of Nvidia’s Vera Rubin platform at GTC 2026. Reuters had reported that Nvidia was adapting Groq LPUs for China and that the products were targeting a May release, but Huang’s denial directly challenges that account.

Inferencing remains a more competitive field than training, particularly in China. Chinese companies including Baidu and Huawei are developing their own inferencing chips and have received substantial backing to accelerate that work. Global rivals are also active, with Meta developing its MTIA inferencing chips, Google advancing its TPUs, and Amazon pushing its own custom accelerators. Even without the reported Groq push, Nvidia appears determined to rebuild from the “0%” China market share figure Huang cited last fall, using renewed H200 availability as its immediate path back into the market.

Impact Score: 58

Chancellor sets principles for UK-EU alignment

Rachel Reeves has outlined a growth plan built around closer UK-EU ties, faster Artificial Intelligence adoption, and stronger regional development. The strategy sets new principles for regulatory alignment, expands support for innovation, and shifts more investment power to city regions.

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.

Governance risk highlights from Infosecurity Magazine

Governance and risk coverage centers on regulation, compliance, cybersecurity policy, and the growing role of Artificial Intelligence in enterprise security. Recent headlines point to pressure on critical infrastructure, standards updates, insider threat guidance, and concerns over guardrails for large language models.

Vals publishes public enterprise language model benchmarks

Vals lists a broad set of public enterprise benchmarks spanning law, finance, healthcare, math, education, academics, coding, and beta agent tasks. The index highlights which models currently lead specific enterprise-focused evaluations and how widely each benchmark has been tested.

MIT method spots overconfident Artificial Intelligence models

MIT researchers developed a way to detect when large language models are confidently wrong by comparing their answers with outputs from similar models. The combined uncertainty measure outperformed standard techniques across a range of tasks and may help reduce unreliable responses.
