China tries its hand at advanced artificial intelligence chips without Nvidia

Cut off from Nvidia's most advanced GPUs by export controls, China is pushing Alibaba, Huawei and local startups to produce domestic artificial intelligence chips and compatible software stacks.

Vendor lock-in shapes the current market for artificial intelligence chips. Nvidia dominates with graphics processing units that are seen as the most powerful and scalable for training large language models, reinforced by an extensive software ecosystem. Competitors have focused on inference workloads, which are less demanding than training and less dependent on Nvidia's software stack. Companies such as Groq and Cerebras and a range of startups target that space, while in China the push centers on Alibaba and Huawei alongside smaller players like MetaX and Cambricon.

Export restrictions have limited Chinese access to Nvidia's latest generations. The article notes that Blackwell chips, Nvidia's generation beyond Hopper, are not destined for China, and that the H20 is the weaker, China-available sibling of the H100. The Chinese government has told domestic firms to stop ordering from Nvidia and instead rely on locally produced chips. A major development cited is Alibaba making its chips compatible with Nvidia's software, a breakthrough reported by the Wall Street Journal that reduces friction for developers. The piece also highlights DeepSeek's experience: DeepSeek-R1 achieved high training efficiency by optimizing at the level of Nvidia's low-level PTX instruction set, while DeepSeek-R2 was reportedly delayed by an attempt to use Chinese chips for training.

Timeframes and manufacturing constraints are central to the challenge. The chip industry typically moves in multi-year cycles, while Nvidia has accelerated generation turnover to roughly yearly releases. To compete, China must match or exceed that pace or benefit from a slowdown at Nvidia. Further limits come from restricted access to ASML's advanced High-NA EUV lithography equipment, which the article identifies as a likely ceiling on how fast Chinese processors can improve. Absent major changes such as talent movement, intellectual property shifts, or unforeseen breakthroughs, the article cautions that claims of rapid progress in Chinese artificial intelligence chips should be treated with skepticism.

Impact Score: 70

Apertus: Swiss teams release fully open multilingual large language model

EPFL, ETH Zurich and the Swiss National Supercomputing Centre have released Apertus, a fully open multilingual large language model with its architecture, weights and training recipes published. The model is intended to support research, commercial adoption and public oversight of Artificial Intelligence.

Marvell extends CXL ecosystem leadership with Structera interoperability across major memory and CPU platforms

Marvell announced that its Structera Compute Express Link memory-expansion controllers and near-memory compute accelerators passed interoperability testing with DDR4 and DDR5 from Micron Technology, Samsung Electronics, and SK hynix. The company says this makes Structera the only CXL 2.0 product family validated across both major CPU architectures and all three memory suppliers.

32 GB of RAM could become the new standard for gamers

Steam's hardware survey shows 32 GB of RAM rose to 36.46% of surveyed systems in August, up 1.31 percentage points from July and closing on 16 GB at 41.88%. Cheaper DDR5, broader OEM memory options, and local Artificial Intelligence and streaming workloads are cited as drivers.

Artificial intelligence sharpens humidity maps to improve forecasts

Researchers at Wrocław University of Environmental and Life Sciences used a super-resolution approach powered by Artificial Intelligence and NVIDIA GPUs to turn low-resolution GNSS snapshots into high-resolution 3D humidity maps, cutting retrieval errors in test regions.
