CXL and HBM reshape memory competition in data centers

CXL is emerging as a complementary technology to HBM in Artificial Intelligence servers, promising larger memory pools, lower costs, and more flexible scaling. Samsung, SK Hynix, Micron, Intel, AMD, NVIDIA, and Google are all pushing the ecosystem toward broader deployment.

Compute Express Link, or CXL, is moving from a niche interconnect technology into a more prominent role in Artificial Intelligence infrastructure as data centers look for ways to overcome the memory wall. HBM remains critical for bandwidth-intensive model computation, but CXL is positioned to complement it by enabling larger memory capacity, external memory expansion, and shared memory pools across servers. This combination could give future Artificial Intelligence servers a split architecture in which HBM handles core computation while CXL supports massive data access at lower cost and with greater scalability.

HBM is a hardware architecture that boosts bandwidth through 3D-stacked DRAM placed close to processors, while CXL is an interconnect protocol designed to improve communication between CPUs and external devices. Compared to HBM, CXL offers lower bandwidth but far greater flexibility in capacity: it can bypass server slot limits, provide terabyte-scale memory, and enable memory pooling among multiple processors. CXL 2.0 introduces memory pooling, allowing shared memory to be dynamically assigned to the servers that need it most, which can improve utilization and reduce costs as memory prices rise. CXL also shortens data transfer paths by allowing hardware-level read and write access to pooled resources, avoiding some of the latency and software overhead of traditional PCIe-based communication.

Samsung’s CXL memory system “Pangea v2” has demonstrated data transfer capabilities 10.2 times higher than traditional RDMA solutions and a bottleneck reduction of up to 96%. The system uses the CXL 2.0 standard, released in 2020, and integrates 22 CXL DRAM modules into a shared memory pool, supporting multi-server access to a maximum capacity of 5.5TB. Samsung plans to release “Pangea v3” in 2026, based on newer specifications expected to add stronger optical communication support and higher single-port bandwidth. The company presents its progress as a benchmark for the current CXL 2.0 era rather than a permanent lead, given the pace of change in the standard.

The broader CXL ecosystem is also taking shape. SK Hynix presented its CMM-DDR5 CXL memory module at the CFMS 2026 Global Flash Summit in March, having already introduced its HMSDK software for CXL operations, and Micron launched the CZ120 memory expansion module in 2023. Intel’s 5th Gen Xeon and Granite Rapids processors support CXL 2.0, with some support for CXL 3.0, while AMD’s EPYC Genoa and Turin series support CXL memory expansion. NVIDIA plans to support the CXL 3.1 standard in its Vera CPU later this year, and Google has begun deploying CXL in its data centers with controllers that manage traffic between CPUs and large external memory pools.

The shift could change how investors view memory and storage companies. Competition in HBM has focused on DRAM stacking, but CXL raises the importance of controller design, processor compatibility, latency, and management software. The technology also narrows the gap between memory and storage by making memory larger and storage faster. If CXL-equipped SSDs become mainstream, storage products may gain premium pricing similar to HBM. As CXL adoption grows, the industry could become more segmented across HBM, DDR5, and CXL expansion memory, while data center design may move from CPU-centric systems toward memory-pooling-centric architecture.

Impact Score: 68

Artificial Intelligence agents face memory limits in wealth management

Citi is pushing deeper into Artificial Intelligence for wealth management with a new digital advisor, but industry executives say agent memory remains a major constraint. Better short-term and long-term recall could eventually help advisors serve more clients and maintain more continuous relationships.

OpenClaw pushes autonomous Artificial Intelligence agents into enterprises

OpenClaw’s rapid growth is accelerating interest in persistent, self-hosted autonomous agents that run continuously instead of waiting for prompts. NVIDIA is positioning NemoClaw as a more secure reference implementation for organizations that want local control, auditability and hardened deployment defaults.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.
