Samsung to supply half of NVIDIA’s SOCAMM2 modules in 2026

Hankyung reports Samsung Electronics has secured a deal to supply half of NVIDIA's SOCAMM2 modules in 2026 for the 'Vera Rubin' Superchip, which pairs two 'Rubin' Artificial Intelligence GPUs with one 'Vera' CPU and moves from hardwired memory to DDR5 SOCAMM2 modules.

Korean tech publication Hankyung reports that Samsung Electronics has secured a deal to supply half of NVIDIA’s SOCAMM2 modules in 2026. NVIDIA designed its upcoming ‘Vera Rubin’ Superchip module around DDR5 SOCAMM2: the design pairs two ‘Rubin’ Artificial Intelligence GPUs and one ‘Vera’ CPU on a single Superchip module, and the move to detachable SOCAMM2 memory marks a shift from the hardwired approach used today.

SOCAMM2, or small-outline CAMM2, is described as a variant of DDR5 CAMM2 modules with slimmer outlines intended to further minimize PCB footprint. On NVIDIA’s roadmap, these DDR5 SOCAMM2 modules will replace the hardwired LPDDR5X ECC memory used on current ‘Grace Blackwell’ Superchip modules. The article highlights a specific architecture change: replacing 512-bit LPDDR5X on Grace with 8-channel (16 sub-channel) DDR5 on Vera, a configuration that the report says enables higher densities, upgradeability, and higher clock speeds relative to the existing design.
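For scale, a hedged back-of-the-envelope: a standard DDR5 channel is 64 data bits wide, split into two 32-bit sub-channels, so the 8-channel (16 sub-channel) Vera layout works out to the same 512-bit aggregate width quoted for Grace’s LPDDR5X; the 8,000 MT/s data rate below is an illustrative assumption, not a figure from the report.

\[
8 \times 64\,\text{bit} = 512\,\text{bit}, \qquad
\frac{512\,\text{bit}}{8\,\text{bit/B}} \times 8000\,\text{MT/s} = 512\,\text{GB/s}
\]

On those assumptions the DDR5 arrangement matches the outgoing bus width, so the gains the report cites would come from density, clock speed, and module swappability rather than a wider bus.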

The deal assigns Samsung to provide roughly half of NVIDIA’s SOCAMM2 supply for 2026, reflecting a component-level shift in NVIDIA’s Superchip strategy. By moving to modular DDR5 SOCAMM2 parts instead of integrated LPDDR5X ECC, NVIDIA can pursue denser memory configurations and field upgrade paths for its Superchip modules. The report frames the change as both a technical migration toward DDR5-based, module-swappable memory and a supply agreement that positions Samsung as a major supplier for NVIDIA’s 2026 Vera Rubin deployments.

Impact Score: 55

NVIDIA announces CUDA Tile in CUDA 13.1

CUDA 13.1 introduces CUDA Tile, a virtual instruction set for tile-based parallel programming. It raises the programming abstraction above SIMT and abstracts the tensor core hardware so that code targets current and future tensor core architectures alike. The change targets workloads, including Artificial Intelligence, where tensors are a fundamental data type.
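The report does not show CUDA Tile syntax, so the sketch below illustrates only the baseline the new model raises the abstraction above: a conventional SIMT matrix multiply in plain CUDA C++, where the programmer indexes one output element per thread. Under a tile-based model as described, the same product would be written as an operation over whole tiles, with the mapping onto threads and tensor core instructions left to the toolchain. All names and sizes here are illustrative, not taken from CUDA 13.1.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Conventional SIMT-style kernel: each thread computes one element of
    // C = A * B, with all indexing managed explicitly by the programmer.
    __global__ void matmul_simt(const float* A, const float* B, float* C, int N) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < N && col < N) {
            float acc = 0.0f;
            for (int k = 0; k < N; ++k)
                acc += A[row * N + k] * B[k * N + col];
            C[row * N + col] = acc;
        }
    }

    int main() {
        const int N = 256;
        const size_t bytes = N * N * sizeof(float);
        float *A, *B, *C;
        cudaMallocManaged(&A, bytes);
        cudaMallocManaged(&B, bytes);
        cudaMallocManaged(&C, bytes);
        for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

        // One thread per output element: the SIMT granularity that CUDA Tile
        // is described as abstracting away.
        dim3 block(16, 16);
        dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
        matmul_simt<<<grid, block>>>(A, B, C, N);
        cudaDeviceSynchronize();

        printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * N);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }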
