OpenAI launched GPT-5.2, which it describes as its most capable model series yet for professional knowledge work. The company trained and deployed the model on NVIDIA infrastructure, including NVIDIA Hopper and GB200 NVL72 systems. NVIDIA positions this as the latest example of leading AI builders relying on its full-stack infrastructure platform, which spans accelerators, advanced networking and optimized software for training and deploying frontier models at massive scale. NVIDIA frames pretraining and post-training, together with test-time scaling, as the bedrock of model intelligence, and emphasizes that training frontier models from scratch can require tens of thousands, or even hundreds of thousands, of GPUs working together effectively.
NVIDIA highlights performance gains across its newer systems: compared with the NVIDIA Hopper architecture, NVIDIA GB200 NVL72 systems delivered 3x faster training performance on the largest model tested in the latest MLPerf Training industry benchmarks, along with nearly 2x better performance per dollar. NVIDIA also says NVIDIA GB300 NVL72 delivers a more than 4x speedup over NVIDIA Hopper, and argues that these improvements let AI developers shorten development cycles and push new models into production more quickly. The company notes that the majority of today’s leading large language models were trained on NVIDIA platforms, and stresses support for multimodal AI workloads, including speech, image and video generation, alongside emerging domains such as biology and robotics.
NVIDIA points to specific models that rely on its hardware, such as Evo 2 for decoding genetic sequences, OpenFold3 for predicting 3D protein structures and Boltz-2 for simulating drug interactions, as well as NVIDIA Clara synthesis models that generate realistic medical images to improve screening and diagnosis without exposing patient data. Creative and interactive AI companies such as Runway and Inworld are also cited, with Runway’s Gen-4.5 video generation model and its GWM-1 general world model both trained on NVIDIA Blackwell GPUs and optimized for that platform. NVIDIA underscores its breadth by noting that it submitted results across all seven MLPerf Training 5.1 benchmarks and was the only platform to participate in every category, versatility it says helps data centers use resources more efficiently. The company adds that AI labs such as Black Forest Labs, Cohere, Mistral, OpenAI, Reflection and Thinking Machines Lab are training on the NVIDIA Blackwell platform, which is now widely available from major cloud providers and NVIDIA Cloud Partners, including Amazon Web Services, CoreWeave, Google Cloud, Lambda, Microsoft Azure, Nebius, Oracle Cloud Infrastructure and Together AI, and in NVIDIA Blackwell Ultra variants rolling out from server makers and cloud services.
