NVIDIA has reportedly postponed the commercialization of its System on Chip Advanced Memory Module (SOCAMM), a new memory form factor developed in collaboration with leading manufacturers SK Hynix, Samsung, and Micron. Initially, SOCAMM was slated for deployment in NVIDIA's current-generation GB300 Grace Blackwell Ultra Superchip, with Micron describing its SOCAMM as a modular LPDDR5X memory solution designed to support the latest NVIDIA enterprise hardware. However, recent reports from South Korea indicate that these plans have shifted, with SOCAMM's introduction likely postponed until the launch of the next-generation "Rubin" GPU architecture.
Industry sources revealed that NVIDIA communicated the delay to its major memory partners, including Samsung Electronics, SK Hynix, and Micron, with updates sent around May 14. The delay necessitates adjusted SOCAMM supply timelines. Originally, the GB300 platform was expected to adopt a new board design dubbed "Cordelia", which was compatible with SOCAMM modules. Citing persistent technical challenges, NVIDIA has reportedly reverted to its existing "Bianca" board design, which retains support for existing LPDDR memory implementations rather than the modular SOCAMM format.
ZDNet Korea attributes the postponement to significant engineering issues. Chief among these are challenges related to the design and packaging yields of Blackwell chips, reliability problems with the Cordelia substrate (including data loss), and difficulties managing heat dissipation in the SOCAMM modules themselves. With these reliability concerns ongoing, NVIDIA now aims for the SOCAMM standard to feature prominently alongside its next major enterprise release: the Rubin Artificial Intelligence GPU family. Rubin Ultra was previewed during NVIDIA's GTC 2025 keynote, with a projected rollout window in the second half of 2027. This strategic shift underscores the high technical bar of server-class memory integration and the growing complexity in pairing cutting-edge memory technology with Artificial Intelligence accelerator hardware.