China eyes chip-stacking to narrow gap with NVIDIA

Wei Shaojun said China could narrow its technology gap with NVIDIA by stacking 14 nm logic chips with 18 nm DRAM and new compute architectures. The approach is aimed at improving Artificial Intelligence performance and energy efficiency while relying on a fully domestic supply chain.

Wei Shaojun, vice president of the China Semiconductor Industry Association, outlined a route for Chinese chipmakers to close the technology gap with NVIDIA by combining mature logic nodes with denser memory and new compute architectures. He said 14 nm logic chips, which lag the 4 nm-class silicon used in NVIDIA’s current Artificial Intelligence GPUs, could approach comparable performance when paired with 18 nm DRAM and linked using 3D hybrid bonding. That near-memory computing architecture places compute elements directly beside memory to reduce data movement and improve energy efficiency.
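The energy argument for near-memory computing comes down to data movement costing far more than arithmetic. A minimal back-of-envelope sketch, using purely illustrative energy figures (not measured values from any specific chip), shows why shortening the path between compute and DRAM matters for memory-bound Artificial Intelligence workloads:

```python
# Back-of-envelope model of why near-memory computing saves energy.
# All per-operation energy numbers below are illustrative assumptions,
# not measured figures for any real 14 nm or 4 nm part.

PJ_PER_FLOP = 1.0            # assumed energy for one low-precision multiply-accumulate
PJ_PER_BYTE_OFFCHIP = 100.0  # assumed energy to fetch one byte from off-package DRAM
PJ_PER_BYTE_STACKED = 10.0   # assumed energy via 3D hybrid-bonded, stacked DRAM

def energy_pj(flops: float, bytes_moved: float, pj_per_byte: float) -> float:
    """Total energy in picojoules: compute plus data movement."""
    return flops * PJ_PER_FLOP + bytes_moved * pj_per_byte

# A memory-bound workload: one byte moved per FLOP (arithmetic intensity = 1),
# typical of large-model inference dominated by weight streaming.
flops = 1e9
bytes_moved = 1e9

offchip = energy_pj(flops, bytes_moved, PJ_PER_BYTE_OFFCHIP)
stacked = energy_pj(flops, bytes_moved, PJ_PER_BYTE_STACKED)

print(f"energy ratio (off-chip / stacked): {offchip / stacked:.1f}x")  # 9.2x
```

Under these assumed numbers, nearly all of the energy goes to moving data, so cutting the per-byte cost dominates any compute-side difference between nodes.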

The proposal echoes Huawei’s recent “stacking and clustering” strategy and is positioned as a workaround to US export controls that restrict access to advanced nodes such as 5 nm and below. Wei emphasized the design would rely on a fully domestic supply chain. He noted that both 14 nm logic and 18 nm DRAM themselves fall under current US export restrictions, which limit Chinese access to foreign production at those nodes. Wei also described the broader market issue as a “triple dependence” on NVIDIA hardware, CUDA software, and prevailing models and architectures, underscoring the ecosystem-level challenge for China in building parallel tooling and platforms.

Chinese alternatives to NVIDIA hardware are beginning to appear, according to the remarks. Wei highlighted emerging GPU and accelerator efforts, including Zhonghao Xinying, a startup founded by former Google engineer Yanggong Yifa, which claims its custom TPU-style ASIC accelerator can reach “up to 1.5x” the compute performance of NVIDIA’s older A100 GPU. The argument from proponents of stacking is that performance can scale by increasing chip integration density rather than relying solely on cutting-edge lithography, offering a path to close gaps in Artificial Intelligence compute capability within the limits of current supply chains and export controls.
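The density-over-lithography argument can be made concrete with a simple scaling model. The throughput figures and scaling efficiency below are assumptions chosen for illustration only; they are not vendor specifications for any NVIDIA or Chinese part:

```python
# Illustrative model: compensating for a process-node gap with integration density.
# All throughput figures and the scaling efficiency are assumptions for this
# sketch, not vendor data.

ADVANCED_DIE_TFLOPS = 100.0  # assumed throughput of a 4 nm-class accelerator die
MATURE_DIE_TFLOPS = 25.0     # assumed throughput of a 14 nm die of similar area

def stacked_throughput(n_dies: int, per_die: float, scaling_eff: float = 0.85) -> float:
    """Aggregate throughput of n stacked or clustered dies.

    scaling_eff models the losses from interconnect overhead and imperfect
    parallel scaling across dies.
    """
    return per_die * n_dies * scaling_eff

# How many mature-node dies does it take to match one advanced die
# under this model?
n = 1
while stacked_throughput(n, MATURE_DIE_TFLOPS) < ADVANCED_DIE_TFLOPS:
    n += 1
print(n)  # 5 under these assumptions
```

The sketch shows the trade: the node gap can be closed in aggregate throughput, but only by spending more dies, power, and packaging, which is why energy efficiency from near-memory designs is central to making the approach viable.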


Microsoft previews Shader Model 6.10 for GPU Artificial Intelligence engines

Microsoft has introduced Shader Model 6.10 in Agility SDK 1.720-preview with a new matrix API designed to unify access to dedicated GPU Artificial Intelligence hardware from AMD, Intel, and NVIDIA. The change is aimed at making neural rendering features easier to deploy across multiple vendors with a single programming model.

Europe’s Artificial Intelligence challenge is structural dependence

Europe has talent, research strength, and rising investment in Artificial Intelligence, but startups remain reliant on American infrastructure, platforms, and late-stage capital. The argument centers on digital sovereignty, interoperability, and ownership as the conditions for building durable European champions.

Community backlash slows Artificial Intelligence data center expansion

Political resistance, regulatory scrutiny, and rising energy and water concerns are complicating the build-out of large Artificial Intelligence data centers across the United States. The pressure is increasing costs, delaying projects, and adding fresh risks to the economics behind Generative Artificial Intelligence infrastructure.

House panel advances export controls after China report

The House Foreign Affairs Committee moved export control legislation after a House Select Committee report detailed China’s use of illegal means to build its Artificial Intelligence and semiconductor sectors. The measure is aimed at chip smuggling and Artificial Intelligence model theft.
