NVIDIA Blackwell achieves record performance in latest MLPerf Training results

NVIDIA's Blackwell architecture posts unmatched results across all MLPerf Training v5.0 benchmarks, showcasing its leadership in next-generation Artificial Intelligence workloads.

NVIDIA's Blackwell architecture has set a new performance standard in the latest round of MLPerf Training benchmarks, delivering the top results across every workload at scale. This round marks the twelfth iteration of the MLPerf Training evaluations since their inception in 2018, and further cements Blackwell's role in enabling rapid development and deployment of sophisticated Artificial Intelligence applications worldwide.

The NVIDIA platform stands out as the only entrant to report results on every MLPerf Training v5.0 benchmark. Its capabilities were especially evident in the large language model category with the successful pretraining of Llama 3.1 405B, the benchmark suite's most demanding test. NVIDIA's submissions spanned a broad spectrum of workloads, covering large language models, recommendation systems, multimodal processing, object detection, and graph neural network tasks. This comprehensive showing was enabled by two advanced supercomputers powered by the Blackwell architecture: Tyche, featuring rack-scale NVIDIA GB200 NVL72 systems, and Nyx, based on NVIDIA DGX B200 systems.

Collaboration was key to these achievements: NVIDIA joined forces with CoreWeave and IBM to submit additional results using the same GB200 NVL72 designs. The joint submissions employed a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs, reflecting both the scalability and the versatility of Blackwell-based systems. By topping every benchmark in the MLPerf Training suite, NVIDIA underscores its pivotal role in accelerating Artificial Intelligence infrastructure for a new era of computational demands.

Why multimodal content pipelines are reshaping media production

Multimodal content creation pipelines are consolidating text, image, and audio workflows into integrated systems that compress production timelines and expand monetization options, while raising fresh legal and ethical challenges. The article examines the tools, economics, and skills driving this shift for tens of millions of creators.

Semiconductor coverage tracks geopolitics, telecom chips and Artificial Intelligence demand

Light Reading’s semiconductor section brings together coverage of geopolitical risks in chip supply, telecom silicon shakeups and surging Artificial Intelligence infrastructure demand, with a strong focus on how these forces reshape vendors such as Intel, Nvidia, Qualcomm, Samsung and Nokia. The stream highlights how shifts in rare earths policy, network silicon strategy and massive memory orders are redefining the broader communications and computing ecosystem.
