AMD unveils first MLPerf 5.1 Artificial Intelligence training results on Instinct MI350 series

AMD submitted the first MLPerf 5.1 training benchmarks using its Instinct MI350 Series GPUs, including the MI355X and MI350X, marking the first public Artificial Intelligence training results for the new accelerator family.

AMD announced the first MLPerf 5.1 Training submission that uses the Instinct MI350 Series GPUs. The submission covers both the MI355X and MI350X models and represents the first public benchmark of the Instinct MI350 Series for Artificial Intelligence training workloads. The release follows AMD’s earlier MLPerf 5.1 Inference results and extends the company’s public performance data into training scenarios.

The MLPerf 5.1 Training results highlight generational performance gains for the Instinct MI350 Series. According to the announcement, the benchmarks demonstrate clear progress in scalability, efficiency, and compute performance across several of today’s most demanding Artificial Intelligence training workloads. AMD also emphasized broad ecosystem participation in the submission, indicating that multiple software and system partners were involved in validating the training runs on the new hardware.

By publicizing these MLPerf 5.1 Training results, AMD positions the Instinct MI350 Series as a platform aimed at accelerating the development of next-generation generative Artificial Intelligence models. The company framed the results as evidence that the MI355X and MI350X deliver the compute and efficiency characteristics needed for large-scale training. Overall, the submission serves both as a milestone for the Instinct MI350 Series and as a continuation of AMD’s benchmarking efforts across inference and training workloads.


JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation Tensor Processing Units

Google introduced its eighth generation of custom Tensor Processing Units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
