Micron integrates high-capacity HBM3E memory into AMD Instinct MI350 Series

Micron's latest HBM3E memory powers AMD's advanced Instinct MI350 Series, optimizing Artificial Intelligence and high-performance computing workloads.

Micron Technology has announced that its advanced HBM3E 36 GB 12-high memory will be integrated into AMD's upcoming Instinct MI350 Series GPU platforms. The move highlights the ongoing importance of both power efficiency and high performance in Artificial Intelligence model training and high-performance computing. Micron frames the integration as a key milestone that extends its leadership in high-bandwidth memory and strengthens its partnership with industry leaders such as AMD.

Micron's new HBM3E memory delivers top-tier bandwidth and reduced power consumption, directly enabling AMD's CDNA 4 architecture-based Instinct MI350 Series GPUs to reach new heights in data throughput. With 288 GB of HBM3E per GPU and total system configurations supporting up to 2.3 TB, the platform is capable of delivering up to 8 TB/s of bandwidth and theoretical performance of up to 161 PFLOPS at FP4 precision. These specifications equip a single GPU to hold Artificial Intelligence models containing as many as 520 billion parameters, significantly advancing what can be accomplished on a single chip within modern data centers.
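The 520-billion-parameter figure can be sanity-checked with simple arithmetic: at FP4 precision each parameter occupies 4 bits (0.5 bytes), so the weights alone come to roughly 260 GB, which fits within the 288 GB of HBM3E per GPU. The sketch below illustrates that calculation; note it counts only model weights, not activations or KV-cache overhead, which the announcement does not break down.

```python
# Back-of-envelope check: can a 520B-parameter model fit in 288 GB of HBM3E
# at FP4 precision? FP4 stores each parameter in 4 bits = 0.5 bytes.
params = 520e9           # 520 billion parameters (from the announcement)
bytes_per_param = 0.5    # FP4: 4 bits per parameter
hbm_capacity_gb = 288    # HBM3E capacity per MI350 Series GPU

weights_gb = params * bytes_per_param / 1e9
print(f"Weights at FP4: {weights_gb:.0f} GB of {hbm_capacity_gb} GB available")
# → Weights at FP4: 260 GB of 288 GB available
```

The remaining ~28 GB of headroom would be shared by activations and runtime buffers, which is why 520 billion parameters is quoted as the practical single-GPU ceiling rather than the theoretical 576 billion that 288 GB of pure weight storage would imply.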

This integration of Micron's memory technology with AMD's architecture sets a new benchmark for energy-efficient, high-density computing. The synergy allows for faster training and inference of large language models as well as more efficient scientific simulations and complex data processing workloads. Both companies emphasize how this collaboration not only maximizes compute performance per watt but also accelerates time-to-market for next-generation Artificial Intelligence solutions, empowering organizations to address increasing demand without compromising on scalability or operational efficiency.
