NVIDIA Delays SOCAMM Memory Standard, Targeting Launch with "Rubin" Artificial Intelligence GPUs

NVIDIA's System on Chip Advanced Memory Module (SOCAMM) rollout is now expected to coincide with its next-generation "Rubin" Artificial Intelligence GPU architecture, following delays due to engineering hurdles.

NVIDIA has reportedly postponed the commercialization of its System on Chip Advanced Memory Module (SOCAMM), a new memory form factor developed in collaboration with leading manufacturers SK Hynix, Samsung, and Micron. SOCAMM was initially slated for deployment in NVIDIA's current-generation GB300 Grace Blackwell Ultra Superchip, with Micron describing its SOCAMM as a modular LPDDR5X memory solution designed to support the latest NVIDIA enterprise hardware. However, recent reports from South Korea indicate that these plans have shifted, with SOCAMM's introduction likely pushed back to the launch of the next-generation "Rubin" GPU architecture.

Industry sources revealed that NVIDIA communicated the delay to its major memory partners, including Samsung Electronics, SK Hynix, and Micron, with updates sent around May 14. The delay necessitates adjusted SOCAMM supply timelines. Originally, the GB300 platform was expected to adopt a new board design dubbed "Cordelia", which was compatible with SOCAMM modules. Citing persistent technical challenges, NVIDIA has reportedly reverted to its existing "Bianca" board design, which maintains support for current LPDDR memory standards rather than the advanced features of SOCAMM.

ZDNet Korea attributes the postponement to significant engineering issues. Chief among these are challenges related to the design and packaging yields of Blackwell chips, reliability problems with the Cordelia substrate, such as data loss, and difficulties managing heat dissipation in the SOCAMM modules themselves. Given these ongoing reliability concerns, NVIDIA now aims for the SOCAMM standard to feature prominently alongside its next major enterprise release: the Rubin Artificial Intelligence GPU family. Rubin Ultra was previewed during NVIDIA's GTC 2025 keynote, with a projected rollout window in the second half of 2027. This strategic shift underscores the high technical bar of server-class memory integration and the growing complexity of pairing cutting-edge memory technology with Artificial Intelligence accelerator hardware.

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
