MLPerf Training v5.0 results show NVIDIA Blackwell doubles training speed, AMD outpaces NVIDIA in select benchmarks

MLCommons' latest MLPerf Training v5.0 benchmarks reveal major leaps in Artificial Intelligence infrastructure performance, with NVIDIA's Blackwell chips doubling training speed over Hopper and AMD's Instinct MI325X outperforming NVIDIA in key tasks.

MLCommons, a consortium dedicated to benchmarking Artificial Intelligence infrastructure, released the results of the MLPerf Training v5.0 benchmark on June 4, 2025. This industry-standard suite assesses the training performance of hardware used for large-scale Artificial Intelligence workloads. Companies such as NVIDIA, AMD, and Intel develop specialized chips for these tasks, and major vendors, including Dell and Oracle, deploy this hardware in their infrastructure offerings. The MLPerf benchmarks gauge real-world training performance and have recently shifted focus to the time required to train or adapt state-of-the-art foundation models, such as Llama 3.1 405B, replacing older tests like GPT-3 training.

Results released in this benchmark cycle highlight extraordinary progress over the past six months. Performance gains were particularly striking: the time to train Stable Diffusion improved by a factor of 2.28, and Llama 2 70B training times by a factor of 2.10, compared to the previous round, MLPerf Training v4.1. Among the standouts, NVIDIA's new Blackwell-generation chips more than doubled the training speed of the previous Hopper generation, according to company-published comparison data. Notably, NVIDIA was the sole submitter to provide results across all MLPerf v5.0 testing categories, underscoring the company's broad market penetration and extensive portfolio.
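The speedup factors reported above are simple ratios of time-to-train between benchmark rounds. A minimal sketch of that arithmetic (the timings below are placeholder values for illustration, not actual MLPerf submission data):

```python
def speedup(old_minutes: float, new_minutes: float) -> float:
    """Speedup factor between rounds: old time-to-train divided by new time-to-train."""
    return old_minutes / new_minutes

# Placeholder example: a workload that took 45.6 minutes in one round
# and 20.0 minutes in the next yields a 2.28x speedup.
factor = speedup(45.6, 20.0)
print(round(factor, 2))  # 2.28
```

A higher factor means less wall-clock time to reach the target model quality, which is the metric MLPerf Training scores.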

Meanwhile, AMD disclosed impressive results from its Instinct MI325X hardware, reporting up to 8% faster performance than NVIDIA's H200 chip on LoRA fine-tuning of the Llama 2 70B model. AMD also showed that the MI325X surpassed its own previous-generation Instinct MI300X by as much as 30%. Further, AMD presented consistent results across multiple vendor implementations, arguing that its solution delivers uniformly high performance. This round of benchmarks highlights escalating competition, with both NVIDIA and AMD claiming leadership on different metrics, and signals a new era of rapid and diverse hardware innovation for Artificial Intelligence infrastructure. Full results and data are available on the MLCommons website, providing a resource for industry players comparing hardware capabilities.
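The LoRA fine-tuning task used in this benchmark trains a small low-rank update to frozen pretrained weights rather than updating the full model. A toy NumPy sketch of the core idea (dimensions are illustrative and unrelated to the actual 70B-parameter benchmark):

```python
import numpy as np

# LoRA idea: leave the pretrained weight matrix W (d_out x d_in) frozen and
# train only a low-rank update B @ A with rank r << min(d_out, d_in).
d_out, d_in, r = 64, 128, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection (zero-initialized)

x = rng.normal(size=(d_in,))
y = W @ x + B @ (A @ x)                # adapted forward pass

# Trainable parameter count drops from d_out*d_in to r*(d_out + d_in).
full_params = d_out * d_in             # 8192
lora_params = r * (d_out + d_in)       # 768
print(full_params, lora_params)
```

Because only the low-rank factors are trained, the workload stresses hardware differently from full pretraining, which is why MLPerf scores it as a separate task.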
