Micron Technology has announced that its advanced HBM3E 36 GB 12-high memory will be integrated into AMD's upcoming Instinct MI350 Series GPU platforms. The move underscores the ongoing importance of both power efficiency and high performance in artificial intelligence (AI) model training and high-performance computing. Micron frames the integration as a key milestone that adds to its leadership in high-bandwidth memory while further strengthening its partnerships with industry leaders such as AMD.
Micron's HBM3E memory delivers top-tier bandwidth and reduced power consumption, directly enabling AMD's CDNA 4 architecture-based Instinct MI350 Series GPUs to reach new heights in data throughput. With 288 GB of HBM3E per GPU and system configurations supporting up to 2.3 TB in total, the platform can deliver up to 8 TB/s of bandwidth and theoretical performance of up to 161 PFLOPS at FP4 precision. These specifications allow a single GPU to hold AI models of as many as 520 billion parameters, significantly advancing what can be accomplished on a single chip in modern data centers.
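The headline figures above are internally consistent, which a quick back-of-the-envelope check makes clear. The sketch below assumes an 8-GPU platform (the article does not state the GPU count; 8 × 288 GB ≈ 2.3 TB is what makes the total work out) and that FP4 precision stores each parameter in 4 bits, i.e. half a byte:

```python
# Back-of-the-envelope check of the published MI350 platform figures.
# Assumption (not stated in the article): the 2.3 TB total implies an
# 8-GPU platform, since 8 x 288 GB = 2304 GB, which rounds to 2.3 TB.

hbm_per_gpu_gb = 288          # HBM3E capacity per GPU, from the article
gpus_per_platform = 8         # assumed platform size
platform_hbm_gb = hbm_per_gpu_gb * gpus_per_platform
print(platform_hbm_gb)        # 2304 GB, i.e. roughly 2.3 TB

# Model capacity: at FP4 precision each parameter occupies 4 bits (0.5 bytes).
params = 520e9                # 520 billion parameters, from the article
bytes_per_param_fp4 = 0.5
model_size_gb = params * bytes_per_param_fp4 / 1e9
print(model_size_gb)          # 260 GB -- fits within a single GPU's 288 GB
```

At FP4, a 520-billion-parameter model needs about 260 GB of weights, which is why it fits in a single GPU's 288 GB of HBM3E with headroom to spare.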
This integration of Micron's memory technology with AMD's architecture sets a new benchmark for energy-efficient, high-density computing. The combination enables faster training and inference of large language models, as well as more efficient scientific simulations and complex data-processing workloads. Both companies emphasize that the collaboration not only maximizes compute performance per watt but also accelerates time-to-market for next-generation AI solutions, helping organizations meet growing demand without compromising scalability or operational efficiency.