Compute Express Link, or CXL, is moving from a niche interconnect into a more prominent role in artificial intelligence (AI) infrastructure as data centers look for ways to overcome the memory wall. HBM remains critical for bandwidth-intensive model computation, but CXL is positioned to complement it by enabling larger memory capacity, external memory expansion, and shared memory pools across servers. This combination could give future AI servers a split architecture in which HBM handles core computation while CXL supports massive data access at lower cost and with greater scalability.
HBM is a hardware architecture that boosts bandwidth by stacking DRAM in 3D close to the processor, while CXL is an interconnect protocol designed to improve communication between CPUs and external devices. Compared with HBM, CXL offers lower bandwidth but far greater flexibility in capacity. It can bypass per-server memory slot limits, provide terabyte-scale capacity, and enable memory pooling among multiple processors. CXL 2.0 introduced memory pooling, allowing shared memory to be dynamically assigned to the servers that need it most, which can improve utilization and reduce costs as memory prices rise. CXL also shortens data paths by allowing hardware-level read and write access to pooled resources, avoiding some of the latency and software overhead of traditional PCIe-based communication.
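To make that software-visible difference concrete, the sketch below shows one common way expander memory of this kind is reached on Linux: the device is exposed as a devdax node and mapped into a process, after which it is accessed with ordinary loads and stores rather than explicit transfer calls. The device path, mapping size, and access pattern are illustrative assumptions, not details from the article.

```c
/* Minimal sketch, assuming the CXL memory expander is exposed by the
 * kernel as a devdax character device (e.g. /dev/dax0.0). The path and
 * size below are placeholders chosen for illustration.                  */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *dev = "/dev/dax0.0";   /* hypothetical CXL devdax node   */
    size_t len = 1UL << 30;            /* map 1 GiB of the device        */

    int fd = open(dev, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the device into the process address space. After this call the
     * expander memory is reached with ordinary loads and stores; no RDMA
     * verbs or driver-mediated copy calls sit on the data path.          */
    uint8_t *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    memset(mem, 0xA5, 4096);                 /* direct store to pooled memory */
    printf("first byte: 0x%02x\n", mem[0]);  /* direct load back              */

    munmap(mem, len);
    close(fd);
    return 0;
}
```

CXL memory can also surface as a CPU-less NUMA node managed by the operating system, in which case applications see it as ordinary system RAM; the devdax mapping above is simply the more explicit way to show the load/store data path.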
Samsung’s CXL memory system “Pangea v2” has demonstrated data transfer rates 10.2 times higher than traditional RDMA solutions and a bottleneck reduction of up to 96%. The system is built on the CXL 2.0 standard launched in 2020 and integrates 22 CXL DRAM modules into a shared memory pool, supporting multi-server access to a maximum capacity of 5.5TB. Samsung plans to release “Pangea v3” in 2026, based on newer specifications that are expected to add stronger optical communication support and higher single-port bandwidth. The company’s progress is presented as a benchmark for the current CXL 2.0 era rather than a permanent lead, given the pace at which the standard is evolving.
The broader CXL ecosystem is also taking shape. This March, SK Hynix presented its CMM-DDR5 CXL memory module at the CFMS 2026 Global Flash Summit, having earlier released its HMSDK software for managing CXL memory. Micron launched the CZ120 memory expansion module in 2023. Intel’s 5th Gen Xeon and Granite Rapids processors support CXL 2.0, with partial support for CXL 3.0, while AMD’s EPYC Genoa and Turin series support CXL memory expansion. NVIDIA plans to support the CXL 3.1 standard in its Vera CPU later this year, and Google has begun deploying CXL in its data centers, using controllers to manage traffic between CPUs and large external memory pools.
The shift could change how investors view memory and storage companies. Competition in HBM has centered on DRAM stacking, but CXL raises the importance of controller design, processor compatibility, latency, and management software. The technology also narrows the gap between memory and storage by making memory larger and storage faster. If CXL-equipped SSDs become mainstream, storage products may command premium pricing similar to HBM. As CXL adoption grows, the industry could become more segmented across HBM, DDR5, and CXL expansion memory, while data center design may shift from CPU-centric systems toward memory-pool-centric architectures.
