Micron integrates high-capacity HBM3E memory into AMD Instinct MI350 Series

Micron's latest HBM3E memory powers AMD's advanced Instinct MI350 Series, optimizing Artificial Intelligence and high-performance computing workloads.

Micron Technology has announced that its advanced HBM3E 36 GB 12-high memory will be integrated into AMD's upcoming Instinct MI350 Series GPU platforms. The move underscores the growing importance of both power efficiency and high performance in Artificial Intelligence model training and high-performance computing. Micron frames this as a key achievement, adding to its leadership in high-bandwidth memory while strengthening partnerships with industry leaders like AMD.

Micron's new HBM3E memory delivers top-tier bandwidth and reduced power consumption, directly enabling AMD's CDNA 4 architecture-based Instinct MI350 Series GPUs to reach new heights in data throughput. With 288 GB of HBM3E per GPU and total system configurations supporting up to 2.3 TB, the platform is capable of delivering up to 8 TB/s of bandwidth and theoretical performance up to 161 PFLOPS at FP4 precision. These specifications equip a single GPU to process Artificial Intelligence models containing as many as 520 billion parameters, significantly advancing what can be accomplished on a single chip within modern data centers.
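The headline figures above are internally consistent, which a quick back-of-the-envelope check illustrates. The sketch below assumes an 8-GPU platform (the per-platform GPU count is not stated in the article) and the standard 4-bit (half-byte) encoding for FP4 weights:

```python
# Sanity-check the quoted MI350 memory figures.
# Assumption: 8 GPUs per platform (not stated in the article).
GPUS_PER_PLATFORM = 8
HBM_PER_GPU_GB = 288           # HBM3E per GPU, from the article
PARAMS_BILLION = 520           # model size claimed to fit on one GPU
FP4_BYTES_PER_PARAM = 0.5      # FP4: 4 bits = half a byte per parameter

# Total platform memory in TB (decimal units).
system_memory_tb = GPUS_PER_PLATFORM * HBM_PER_GPU_GB / 1000

# Memory needed just to hold the weights of a 520B-parameter model in FP4.
model_weights_gb = PARAMS_BILLION * FP4_BYTES_PER_PARAM

print(f"System HBM: {system_memory_tb:.3f} TB")        # ~2.3 TB, matching the article
print(f"FP4 weights, 520B params: {model_weights_gb:.0f} GB")  # 260 GB, under 288 GB
```

Under these assumptions the numbers line up: 8 × 288 GB is roughly 2.3 TB of system memory, and 520 billion FP4 parameters occupy about 260 GB of weights, which fits within a single GPU's 288 GB (leaving headroom for activations and the KV cache in inference).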

This integration of Micron's memory technology with AMD's architecture sets a new benchmark for energy-efficient, high-density computing. The synergy allows for faster training and inference of large language models as well as more efficient scientific simulations and complex data processing workloads. Both companies emphasize how this collaboration not only maximizes compute performance per watt but also accelerates time-to-market for next-generation Artificial Intelligence solutions, empowering organizations to address increasing demand without compromising on scalability or operational efficiency.

Impact Score: 75

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a rowhammer attack against GDDR6-based NVIDIA GPUs that can bypass IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.
