Nvidia GB200 artificial intelligence servers reportedly smuggled into China despite export controls

Nvidia’s high-end artificial intelligence servers have surfaced in China’s black markets, highlighting loopholes in US export controls.

Nvidia's flagship artificial intelligence servers, including the advanced GB200 Grace Blackwell systems, are reportedly appearing in Chinese markets despite strict US export restrictions. The Financial Times uncovered multiple sales contracts and filings indicating a robust black market for these servers. Distributors exploit loopholes and grey channels, some routed through intermediary hubs such as Singapore, to supply high-end Nvidia hardware, including B200, H100, and H200 models, to buyers in Chinese provinces such as Anhui.

The US government, particularly under the Trump administration, has intensified efforts to curb the flow of American artificial intelligence hardware into China, citing concerns about national security and technological competition. Despite these measures, Nvidia equipment valued at over a billion dollars has managed to circumvent export rules. Many devices are relabeled under brands like Supermicro (SMCI) to further mask their origin, and listings for such hardware are openly found on popular Chinese retail platforms. Some vendors have even gone as far as offering live demonstrations of working hardware to assure buyers of authenticity and capability.

While the volume of GB200 clusters sold through these backchannels remains small compared with the world's largest artificial intelligence clusters, the hardware is more than adequate for the needs of low- and mid-tier Chinese cloud service providers. The persistent availability of these restricted systems underscores ongoing vulnerabilities in supply chain enforcement. It remains to be seen whether the US will respond by patching further export loopholes or tightening scrutiny of distributors linked to suspected rerouting, as China continues to procure high-performance artificial intelligence compute resources through unconventional avenues.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
