AMD Unveils Instinct MI450X IF128 Rack-Scale GPU Cluster with 128 Units for Artificial Intelligence

AMD's forthcoming Instinct MI450X IF128 rack-scale system, set for 2026, is designed to redefine high-density computing for Artificial Intelligence workloads with 128 GPUs and breakthrough bandwidth and performance.

AMD is gearing up for a landmark launch in the second half of 2026 with its first rack-scale GPU cluster, the Instinct MI450X IF128. The platform is expected to be built on a 3 nm-class TSMC process with advanced CoWoS‑L packaging, allowing each MI450X GPU in the IF128 system to carry at least 288 GB of HBM4 memory. That HBM4 is expected to deliver up to 18 TB/s of bandwidth per GPU, with each GPU targeting approximately 50 PetaFLOPS of FP4 compute and drawing between 1.6 and 2.0 kW of power. AMD has also split its Instinct MI400 series along workload lines, positioning the MI430X for high-performance computing and the MI450X for Artificial Intelligence.

The Artificial Intelligence-optimized MI450X comes in two configurations: 'IF64' for conventional single-rack deployments, and the high-density 'IF128' for large-scale installations. The IF128 system connects 128 GPUs over an Ethernet-based Infinity Fabric network. Notably, it replaces PCIe with UALink to link each GPU directly to three integrated Pensando 800 GbE network interface cards, giving each GPU approximately 1.8 TB/s of unidirectional network bandwidth.

With this architecture, the MI450X IF128 delivers a cumulative 6,400 PetaFLOPS of FP4 compute, 36.9 TB of high-bandwidth memory, and 2,304 TB/s of aggregate memory bandwidth across the rack. The MI450X IF64 offers roughly half those figures. AMD's advances aim to surpass NVIDIA's anticipated 'Vera Rubin' VR200 NVL144, which is projected to top out at 3,600 PetaFLOPS and 936 TB/s of memory bandwidth, giving AMD a significant performance advantage. That lead is expected to hold until NVIDIA's future VR300 'Ultra' NVL576 arrives, featuring 144 GPUs with four compute dies each for extreme scaling. AMD's forthcoming rack-scale system signals a bold challenge in the ongoing race to equip Artificial Intelligence data centers with ever-more-powerful and efficient GPU clusters.
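The rack-level totals follow directly from the per-GPU targets quoted above. A minimal arithmetic sketch, using the rumored and still-unconfirmed per-GPU figures, shows how they add up:

```python
# Rack-level aggregates for the rumored MI450X IF128, derived from the
# per-GPU targets cited in this article. All values are pre-launch estimates.
GPUS_PER_RACK = 128
FP4_PER_GPU_PFLOPS = 50        # ~50 PetaFLOPS of FP4 per GPU
HBM4_PER_GPU_GB = 288          # at least 288 GB of HBM4 per GPU
HBM4_BW_PER_GPU_TBPS = 18      # up to 18 TB/s of memory bandwidth per GPU

total_fp4_pflops = GPUS_PER_RACK * FP4_PER_GPU_PFLOPS       # 6,400 PetaFLOPS
total_hbm_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000       # ~36.9 TB of HBM4
total_hbm_bw_tbps = GPUS_PER_RACK * HBM4_BW_PER_GPU_TBPS    # 2,304 TB/s aggregate

print(f"FP4 compute:    {total_fp4_pflops:,} PetaFLOPS")
print(f"HBM4 capacity:  {total_hbm_tb:.1f} TB")
print(f"HBM4 bandwidth: {total_hbm_bw_tbps:,} TB/s")
```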


