Vendor lock-in shapes the current market for artificial intelligence chips. Nvidia dominates with graphics processing units widely regarded as the most powerful and scalable for training large language models, a position reinforced by its extensive software ecosystem. Competitors have focused on inference workloads, which are less demanding than training and less dependent on Nvidia's software stack. Companies such as Groq and Cerebras, along with a range of startups, target that space, while in China the push centers on Alibaba and Huawei alongside smaller players like MetaX and Cambricon.
Export restrictions have limited Chinese access to Nvidia's latest generations. The article notes that Blackwell chips, Nvidia's generation beyond Hopper, are not destined for China, and that the H20 is the weaker, China-available sibling of the H100. The Chinese government has told domestic firms to stop ordering from Nvidia and to rely on locally produced chips instead. A major development cited is Alibaba making its chips compatible with Nvidia's software, a breakthrough reported by the Wall Street Journal that reduces friction for developers. The piece also highlights DeepSeek's experience: DeepSeek-R1 achieved high training efficiency by optimizing at the level of PTX, Nvidia's low-level GPU programming interface, while DeepSeek-R2 was reportedly delayed when the company attempted to train on Chinese chips.
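To make the PTX point concrete, here is a minimal illustrative sketch, not DeepSeek's actual code: a CUDA kernel that drops from CUDA C++ into inline PTX for a single fused multiply-add instruction. Programming "at the PTX level" means hand-specifying the instructions the GPU executes rather than relying on the compiler's output; the kernel name and values below are invented for illustration.

```cuda
#include <cstdio>

// Hypothetical kernel: computes out[i] = a[i] * b[i] + 2.0f, with the
// fused multiply-add written as inline PTX instead of plain C++.
__global__ void fma_kernel(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float result;
        // Inline PTX: fma.rn.f32 d, a, b, c computes d = a*b + c with
        // round-to-nearest. Emitting it directly is the kind of low-level
        // control the article attributes to DeepSeek's optimizations.
        asm("fma.rn.f32 %0, %1, %2, %3;"
            : "=f"(result)
            : "f"(a[i]), "f"(b[i]), "f"(2.0f));
        out[i] = result;
    }
}

int main() {
    const int n = 256;
    float *a, *b, *out;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.5f; b[i] = 2.0f; }

    fma_kernel<<<1, n>>>(a, b, out, n);
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);  // expect 1.5 * 2.0 + 2.0 = 5.0

    cudaFree(a);
    cudaFree(b);
    cudaFree(out);
    return 0;
}
```

Optimizations at this level are inherently vendor-specific: PTX targets only Nvidia hardware, so effort invested there deepens the lock-in the article describes and does not transfer to Chinese accelerators.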
Timeframes and manufacturing constraints are central to the challenge. The chip industry typically moves in multi-year cycles, while Nvidia has accelerated generation turnover to roughly yearly releases. To compete, China must match or exceed that pace or benefit from a slowdown at Nvidia. Further limits come from restricted access to ASML's advanced High-NA EUV lithography equipment, which the article identifies as a likely ceiling on how fast Chinese processors can improve. Absent major changes such as talent movement, intellectual property shifts, or unforeseen breakthroughs, the article concludes that claims of rapid progress in Chinese artificial intelligence chips should be viewed with skepticism.