Micron has unveiled early engineering samples of its next-generation HBM4 memory, marking a notable milestone in high-bandwidth memory technology. The HBM4 architecture stacks twelve dynamic random-access memory (DRAM) dies in a 12-Hi layout, yielding 36 GB of capacity per stack. By doubling the data interface width to 2,048 bits per stack, each HBM4 module delivers sustained memory bandwidth of 2 TB/s, along with a power-efficiency improvement of more than 20% over the current HBM3E standard. The modules use Micron's mature 1β process node for the DRAM dies, with a shift to EUV-enabled 1γ technology on the horizon for future DDR5 products.
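The per-pin signaling rate implied by those headline numbers follows directly from the stack bandwidth and interface width. The sketch below works through that arithmetic; the 2 TB/s and 2,048-bit figures come from the article, and everything else is plain unit conversion.

```python
# Back-of-the-envelope check: what per-pin data rate does a 2,048-bit
# HBM4 interface need in order to sustain 2 TB/s of stack bandwidth?

STACK_BANDWIDTH_BYTES = 2e12   # 2 TB/s per stack (from the article)
INTERFACE_WIDTH_BITS = 2048    # HBM4 doubles HBM3E's 1,024-bit interface

# Convert bytes/s to bits/s, then divide across the interface pins.
per_pin_gbps = STACK_BANDWIDTH_BYTES * 8 / INTERFACE_WIDTH_BITS / 1e9
print(f"Implied per-pin rate: {per_pin_gbps:.4f} Gb/s")  # ~7.8125 Gb/s

# Delivering the same 2 TB/s over HBM3E's 1,024-bit bus would require
# twice the per-pin rate, which is why widening the interface matters.
narrow_bus_gbps = STACK_BANDWIDTH_BYTES * 8 / 1024 / 1e9
print(f"Per-pin rate at 1,024 bits: {narrow_bus_gbps:.4f} Gb/s")
```

The takeaway is that the doubled interface lets HBM4 reach 2 TB/s at per-pin speeds comparable to today's parts rather than by pushing signaling rates alone.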
Initial samples are shipping to key Micron partners within weeks, and the company aims to begin full-scale HBM4 production in early 2026. Major chipmakers NVIDIA and AMD are poised to be among the earliest adopters. NVIDIA intends to deploy HBM4 in its forthcoming Vera Rubin artificial intelligence accelerators, projected for release in the second half of 2026. AMD, for its part, plans to equip its upcoming Instinct MI400 series with HBM4, with additional details anticipated at the firm's Advancing AI 2025 event. The substantial increase in capacity and bandwidth specifically targets the surging requirements of generative artificial intelligence, high-performance computing, and a broad range of data-intensive workloads.
The taller stacks and wider interfaces of HBM4 enable greater data throughput, a key advantage for the multi-chip packages and memory-coherent interconnects found in modern artificial intelligence and scientific computing architectures. Mass production, however, presents challenges, particularly in managing thermal performance and validating real-world performance. The industry will watch closely as Micron's HBM4 moves from engineering samples to commercial deployment: benchmark results and thermal solutions will ultimately define its role in powering the next generation of artificial intelligence systems and advanced computing applications.
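To put the multi-chip-package advantage in concrete terms, the sketch below scales the per-stack figures from the article (36 GB, 2 TB/s) to hypothetical accelerator packages; the stack counts are illustrative examples, not announced product specifications.

```python
# Illustrative aggregate memory figures for a multi-stack accelerator
# package. Per-stack numbers (36 GB, 2 TB/s) come from the article;
# the stack counts below are hypothetical, not product specs.

PER_STACK_CAPACITY_GB = 36
PER_STACK_BANDWIDTH_TBPS = 2.0

for stacks in (4, 6, 8):
    capacity_gb = stacks * PER_STACK_CAPACITY_GB
    bandwidth_tbps = stacks * PER_STACK_BANDWIDTH_TBPS
    print(f"{stacks} stacks: {capacity_gb} GB capacity, "
          f"{bandwidth_tbps:.0f} TB/s aggregate bandwidth")
```

Even a modest stack count multiplies into hundreds of gigabytes of capacity and double-digit terabytes per second of bandwidth, which is the scale generative-AI training workloads are driving toward.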