CXMT ramps up HBM3 mass production to support Chinese artificial intelligence chips

ChangXin Memory Technologies is moving into mass production of HBM3 modules, signaling a major expansion of China’s domestic high-bandwidth memory supply for artificial intelligence accelerators and for mainstream computing customers.

ChangXin Memory Technologies is initiating mass manufacturing of HBM3 modules as it works to close the gap with established South Korean memory suppliers, which began full fourth-generation high-bandwidth memory production in 2023. Over the past year, the company has highlighted progress across its commercial memory lines, including proprietary DDR5 and LPDDR5X designs that appeared in preview form in November. By early February 2026, major PC brands such as ASUS, Acer, Dell, and HP were reported to be evaluating CXMT consumer-grade memory products as alternative supply sources.

Industry sources first pointed to CXMT’s move into homegrown HBM3 module development in May, describing it as part of a broader diversification of the firm’s manufacturing footprint. By early autumn, a joint venture involving YMTC was viewed as a catalyst for expanding China’s overall HBM manufacturing capabilities. This build-out aligns with efforts by Chinese chipmakers to reduce reliance on foreign memory vendors in the face of ongoing export controls that limit access to advanced Micron, Samsung, and SK Hynix components.

Huawei’s latest Ascend artificial intelligence accelerator is reported to rely on in-house HBM technology, and a source cited by MK South Korea claims Huawei is co-developing HBM with CXMT. According to this account, the collaboration is expected to reach mass production despite concerns about low yields. Further reporting indicates that CXMT plans a monthly output of 60,000 wafers for HBM3 production. That figure allegedly represents about 20% of the company’s total manufacturing capacity of 300,000 wafers per month for 2026, signaling a significant allocation of resources to high-bandwidth memory intended to support domestic artificial intelligence and high-performance computing demand.
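As a quick sanity check on those reported figures, the short Python sketch below works through the capacity-share arithmetic. The wafer counts come from the report above; the function and variable names are illustrative only, not anything CXMT has published.

# Sanity check on the reported CXMT capacity figures (illustrative only).
# Wafer counts are taken from the article; all names are hypothetical.

HBM3_WAFERS_PER_MONTH = 60_000     # reported HBM3 output target
TOTAL_WAFERS_PER_MONTH = 300_000   # reported total monthly capacity for 2026

def capacity_share(part: int, total: int) -> float:
    """Return a production line's share of total monthly capacity, in percent."""
    return 100.0 * part / total

share = capacity_share(HBM3_WAFERS_PER_MONTH, TOTAL_WAFERS_PER_MONTH)
print(f"HBM3 share of monthly wafer capacity: {share:.0f}%")  # -> 20%

Running the sketch confirms the stated allocation: 60,000 wafers is exactly 20% of a 300,000-wafer monthly capacity.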

Impact Score: 65

