Nvidia’s next-generation Vera Rubin AI systems are scheduled to ship in late summer as VR200 NVL72 rack-scale solutions, designed to power the next wave of artificial intelligence models. The memory configuration for these systems has reportedly been narrowed to two high-bandwidth memory suppliers, with Micron excluded from HBM4 design wins. According to leaked institutional notes from SemiAnalysis, SK Hynix will represent about 70% of the HBM4 supply for VR200 NVL72 systems, with Samsung providing the remaining 30%.
For a major memory maker like Micron, there is reportedly zero commitment for HBM4 supply in these Vera Rubin platforms. Instead of contributing HBM4, Micron will participate by supplying LPDDR5X memory for the Vera CPUs, which can be equipped with up to 1.5 TB of LPDDR5X. That LPDDR5X footprint is positioned to help Micron offset the lost HBM4 share, keeping the company present in Nvidia’s broader Vera Rubin ecosystem despite missing the flagship high-bandwidth memory slot.
The shift in supplier mix appears linked to Nvidia’s aggressive system-level memory upgrade for VR200 NVL72. The platform’s target memory bandwidth rose from an initial 13 TB/s in March 2025 to 20.5 TB/s in September as Nvidia raised its expectations. Then, at CES 2026, Nvidia confirmed that the VR200 NVL72 system now operates at 22 TB/s, a nearly 70% increase over the original target, driven entirely by the aggressive memory specification scaling the company demanded from memory makers. These heightened requirements likely influenced which HBM4 vendors could qualify, while also reinforcing the strategic importance of complementary memory technologies like LPDDR5X in balancing performance and capacity across the full Vera Rubin stack.
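As a quick sanity check on the figures above, the jump from the original 13 TB/s target to the confirmed 22 TB/s works out to just over 69%, consistent with the "nearly 70%" characterization:

```python
# Bandwidth figures cited in the article (TB/s):
initial_target = 13.0   # March 2025 system target
revised_target = 20.5   # September 2025 revision
confirmed = 22.0        # Confirmed at CES 2026

# Percentage increase from the initial target to the confirmed figure
increase_pct = (confirmed - initial_target) / initial_target * 100
print(f"Increase over initial target: {increase_pct:.1f}%")  # prints "Increase over initial target: 69.2%"
```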
