AMD unveils Instinct MI350X series artificial intelligence GPU

AMD launches the Instinct MI350X, a next-generation artificial intelligence GPU designed to challenge NVIDIA, powered by the new CDNA 4 architecture.

AMD has officially revealed its Instinct MI350X series GPU, targeting the high-performance artificial intelligence market. The MI350X is built on AMD's latest CDNA 4 compute architecture, positioning it squarely against NVIDIA's B200 'Blackwell' lineup, with the top-tier Instinct MI355X compared directly to the B200. This launch not only introduces the new silicon architecture but also debuts ROCm 7, AMD's refreshed software stack, as well as a hardware ecosystem adhering to the Open Compute Project specification. Collectively, this ecosystem brings together AMD's EPYC Zen 5 CPUs, MI350 series GPUs, Pensando Pollara Ultra Ethernet-capable NICs, and standards-aligned racks and nodes for both air- and liquid-cooled deployments.

The Instinct MI350 is defined by its complex, chiplet-based architecture and stacked silicon design. Central to the GPU are two I/O dies (IODs), engineered using the 6 nm TSMC N6 process, which orchestrate the connectivity for up to four Accelerator Compute Die (XCD) tiles each. These XCDs, fabricated on the advanced 3 nm TSMC N3P node, each house a 4 MB L2 cache and encompass four shader engines with a total of 36 compute units (CU) per XCD. With four XCDs stacked per IOD, each IOD contains 144 CUs, summing to a remarkable total of 288 CUs across the package.

Supporting this computational muscle, each IOD manages four HBM3E memory stacks, amounting to 144 GB per IOD and an impressive 288 GB of high-speed memory for the complete package. Interconnectivity between the two IODs is achieved via a 5.5 TB/s bidirectional link, ensuring full cache coherency and rapid data movement. Additional hardware features include 256 MB of Infinity Cache, a robust Infinity Fabric interface, and a PCI-Express 5.0 x16 root complex, underscoring the platform´s readiness for demanding artificial intelligence workloads in diverse data center environments.
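For readers keeping track of the topology, the per-package totals quoted above follow directly from the per-IOD counts. A minimal sketch of the arithmetic (the constant names are illustrative, and the 36 GB-per-stack figure is implied by 144 GB spread across four HBM3E stacks, not stated explicitly in the article):

```python
# Sanity-check the MI350X package totals from the article's per-IOD figures.
IODS_PER_PACKAGE = 2
XCDS_PER_IOD = 4
CUS_PER_XCD = 36          # compute units per Accelerator Compute Die
HBM_STACKS_PER_IOD = 4
GB_PER_STACK = 36         # implied: 144 GB per IOD / 4 stacks

cus_per_iod = XCDS_PER_IOD * CUS_PER_XCD             # 4 x 36 = 144 CUs
total_cus = IODS_PER_PACKAGE * cus_per_iod           # 2 x 144 = 288 CUs
memory_per_iod = HBM_STACKS_PER_IOD * GB_PER_STACK   # 4 x 36 = 144 GB
total_memory = IODS_PER_PACKAGE * memory_per_iod     # 2 x 144 = 288 GB

print(f"{total_cus} CUs, {total_memory} GB HBM3E")  # 288 CUs, 288 GB HBM3E
```

The multiplication matches the article's headline figures of 288 compute units and 288 GB of HBM3E per package.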

SK hynix debuts 1c LPDDR6 memory with 16 Gb capacity and higher speeds

SK hynix has developed 1c-node LPDDR6 memory with 16 Gb capacity, targeting speeds beyond 10.7 Gbps and improved power efficiency for next-generation devices. The company plans to start mass production in the first half of the year and ship to customers in the second half.

Nvidia debuts RTX Mega Geometry with next-gen ray tracing demos

Nvidia introduced RTX Mega Geometry at GDC 2026 alongside its GeForce RTX 50 series, showcasing new techniques for handling extreme geometric detail in ray-traced scenes. Early demos in Alan Wake 2 and The Witcher 4 highlight performance gains and memory savings from nested triangle clusters.

Nvidia and Thinking Machines form gigawatt-scale artificial intelligence partnership

Nvidia and Thinking Machines Lab have entered a multiyear deal to deploy at least one gigawatt of next-generation Vera Rubin systems for frontier artificial intelligence model training and customizable platforms. The partnership combines major infrastructure commitments with a strategic investment to expand access to frontier and open models for enterprises, researchers, and scientists.
