AMD has revealed its new Instinct MI350 series GPUs, positioning them as a superior and cost-effective alternative to Nvidia's highest-end Blackwell-based chips in the artificial intelligence data center market. Announced at the company's Advancing AI event, the upcoming MI355X and MI350X GPUs tout a substantial 288 GB of HBM3e memory—outclassing Nvidia's B200 and GB200 Superchip offerings—and are slated for launch in the third quarter of the year with support from leading OEM and cloud partners including Dell Technologies, Hewlett Packard Enterprise, Cisco Systems, Oracle, and Supermicro. AMD's strategy includes continued expansion of its partner ecosystem as anticipation builds for the MI400 series products coming next year.
Technical specifications highlight the MI350 series' edge: built on TSMC's 3-nanometer process, the chips pack 185 billion transistors and deliver up to 20 petaflops of peak 6-bit floating point (FP6) and 4-bit floating point (FP4) performance. According to AMD, these figures place the MI355X ahead of Nvidia's latest parts by significant margins for certain workloads. For example, AMD claims roughly 20–30% better inference throughput than Nvidia's B200 on key reference models such as DeepSeek R1 and Llama 3.1. For training and fine-tuning tasks, AMD says the MI355X outpaces Nvidia's top chips by up to 13%—results obtained using open-source frameworks such as SGLang and vLLM rather than Nvidia's more proprietary TensorRT-LLM framework.
In addition to standalone performance improvements, AMD is pushing real-world deployment advantages with new rack-scale solutions that pair MI350 GPUs with its fifth-generation EPYC CPUs and Pollara NICs. The most powerful configuration combines 128 MI355X GPUs with 36 TB of HBM3e memory, reaching a staggering 2.6 exaflops of FP4 compute for intensive artificial intelligence applications. Other configurations include both liquid- and air-cooled options for varying needs and scales. This focus on high memory capacity, economical inference pricing, and tokens-per-dollar value is central to AMD's competitive pitch, directly targeting key concerns of enterprise and cloud providers as Nvidia maintains sizable revenue leads in the market. The company emphasizes a "relentless annual innovation cadence" as it pushes to close the gap in artificial intelligence accelerator leadership.
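The rack-scale figures follow from straightforward per-GPU multiplication. A minimal back-of-the-envelope check, assuming the per-GPU specs quoted earlier (288 GB of HBM3e and 20 petaflops of peak FP4 per MI355X; the variable names here are illustrative, not AMD's):

```python
# Sanity-check AMD's quoted rack-scale numbers from the per-GPU specs.
GPUS_PER_RACK = 128
HBM_PER_GPU_GB = 288        # HBM3e capacity per MI355X (per the announcement)
FP4_PER_GPU_PFLOPS = 20     # peak FP4 throughput per MI355X

total_memory_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1024       # GB -> TB (binary)
total_fp4_eflops = GPUS_PER_RACK * FP4_PER_GPU_PFLOPS / 1000  # PF -> EF

print(f"Rack HBM3e: {total_memory_tb:.0f} TB")        # 36 TB
print(f"Rack FP4:   {total_fp4_eflops:.2f} EF")       # 2.56 EF, quoted as ~2.6
```

Both results line up with the announced configuration: 128 × 288 GB is exactly 36 TB (binary), and 128 × 20 PF is 2.56 exaflops, which AMD rounds to 2.6.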