Huawei Ascend 910C maturation allegedly spurred the NVIDIA H200 export reversal for Artificial Intelligence accelerators

U.S. officials approved exports of NVIDIA H200 data center accelerators to China after assessing competitive pressure from Huawei's Ascend 910C and its CloudMatrix 384 deployment. Bloomberg sources say concerns about rapid growth in Chinese chip manufacturing, including reported plans for 600,000 Ascend 910C units in the coming year, influenced the reversal.

Earlier this week, the U.S. government approved exports of NVIDIA H200 data center-grade accelerators to China, reversing a prior decision that aimed to prevent “full fat” H200 hardware from reaching Chinese customers. Team Green engineers had reportedly been preparing a heavily cut-down “H20” variant to comply with sanctions, a design that first surfaced in reporting earlier in 2025. The reversal has drawn attention because it restores Chinese customers’ access to previous-generation Artificial Intelligence accelerators.

Bloomberg sources say White House officials examined competitive dynamics with Huawei’s Ascend 910C accelerator, notably as deployed in the “super node” CloudMatrix 384 system. While the Ascend 910C lags the H200 on the raw compute and memory bandwidth metrics cited by the sources, an array of 384 Ascend 910C accelerators still represents significant capability for data center workloads. That configuration prompted renewed scrutiny of how Chinese domestic systems might be used alongside, or in place of, imported NVIDIA hardware.

Unnamed industry sources and Bloomberg reporting also convey White House concern about rapid improvements in China’s semiconductor production. Industry whispers cited in the reporting suggest Huawei and its foundry partners could work toward manufacturing 600,000 Ascend 910C units over the coming year. The same sources said some U.S. officials weighed an alternative projection that Huawei “would be capable, in 2026, of producing a few million of its Ascend 910C accelerators.” These production estimates appear to have factored into the export deliberations.

A related Reuters report referenced in the coverage says Chinese regulators are weighing measures to limit local industry’s access to NVIDIA H200 systems. Taken together, the accounts frame the export decision as shaped by both the immediate capabilities of Huawei’s Ascend 910C deployments and broader concerns about scaling domestic production of Artificial Intelligence accelerators in China.

Impact Score: 68

Microsoft unveils Maia 200 artificial intelligence inference accelerator

Microsoft has introduced Maia 200, a custom artificial intelligence inference accelerator built on a 3 nm process and designed to improve the economics of token generation for large models, including GPT-5.2. The chip targets higher performance per dollar for services like Microsoft Foundry and Microsoft 365 Copilot while supporting synthetic data pipelines for next generation models.

Samsung’s 2 nm node progress could revive foundry business and attract Qualcomm

Samsung Foundry’s 2 nm SF2 process is reportedly stabilizing at around 50% yields, positioning the Exynos 2600 as a key proof of concept and potentially helping the chip division return to profit. New demand from Tesla for Artificial Intelligence chips and possible deals with Qualcomm and AMD are seen as central to the turnaround.

How high quality sound shapes virtual communication and trust

As virtual meetings, classes, and content become routine, researchers and audio leaders argue that sound quality is now central to how we judge credibility, intelligence, and trust. Advances in Artificial Intelligence powered audio processing are making clear, unobtrusive sound both more critical and more accessible across work, education, and marketing.
