NVIDIA Blackwell achieves record performance in latest MLPerf Training results

NVIDIA’s Blackwell architecture posts unmatched results across all MLPerf Training v5.0 benchmarks, showcasing its leadership for next-generation Artificial Intelligence workloads.

NVIDIA’s Blackwell architecture has set a new performance standard in the latest round of MLPerf Training benchmarks, delivering the top results across every workload at scale. This round marks the twelfth edition of MLPerf Training since the benchmark’s inception in 2018, and it further cements Blackwell’s role in enabling rapid development and deployment of sophisticated Artificial Intelligence applications worldwide.

The NVIDIA platform stands out as the only entrant to report results on every MLPerf Training v5.0 benchmark. Its capabilities were especially evident in the large language model category, with the successful pretraining of Llama 3.1 405B, the benchmark suite’s most demanding test. NVIDIA’s submissions span a broad spectrum of workloads, covering large language models, recommendation systems, multimodal processing, object detection, and graph neural network tasks. This comprehensive showing was enabled by two supercomputers powered by the Blackwell architecture: Tyche, built on rack-scale NVIDIA GB200 NVL72 systems, and Nyx, based on NVIDIA DGX B200 systems.

Collaboration was key to these achievements: NVIDIA joined forces with CoreWeave and IBM to submit additional results using the same GB200 NVL72 designs. The joint submissions employed a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs, reflecting both the scalability and the versatility of the Blackwell-based systems. By topping every benchmark in the MLPerf Training suite, NVIDIA underscores its pivotal role in accelerating Artificial Intelligence infrastructure for a new era of computational demands.


