OpenAI GPT-5.2 launch underscores NVIDIA’s role in large-scale Artificial Intelligence infrastructure

OpenAI’s new GPT-5.2 model for professional knowledge work was trained and deployed on NVIDIA’s latest accelerators, highlighting how frontier Artificial Intelligence systems increasingly depend on NVIDIA’s full-stack infrastructure and benchmark performance gains.

OpenAI launched GPT-5.2, which it describes as its most capable model series yet for professional knowledge work, and the company trained and deployed the model on NVIDIA infrastructure, including NVIDIA Hopper and GB200 NVL72 systems. NVIDIA positions this as the latest example of leading Artificial Intelligence builders relying on its full-stack infrastructure platform, spanning accelerators, advanced networking and optimized software to train and deploy frontier models at massive scale. NVIDIA frames pretraining and post-training, together with test-time scaling, as the bedrock of model intelligence, and emphasizes that training frontier models from scratch can require tens of thousands, even hundreds of thousands, of GPUs working together effectively.

NVIDIA highlights performance gains across its newer systems, stating that compared with the NVIDIA Hopper architecture, NVIDIA GB200 NVL72 systems delivered 3x faster training performance on the largest model tested in the latest MLPerf Training industry benchmarks, and nearly 2x better performance per dollar. NVIDIA also says that NVIDIA GB300 NVL72 delivers a more than 4x speedup compared with NVIDIA Hopper, and it argues that these improvements allow Artificial Intelligence developers to shorten development cycles and push new models into production more quickly. The company notes that the majority of today’s leading large language models were trained on NVIDIA platforms, and it stresses support for multimodal Artificial Intelligence workloads, including speech, image and video generation, alongside emerging domains such as biology and robotics.

NVIDIA points to specific models that rely on its hardware, such as Evo 2 for decoding genetic sequences, OpenFold3 for predicting 3D protein structures and Boltz-2 for simulating drug interactions, as well as NVIDIA Clara synthesis models that generate realistic medical images to improve screening and diagnosis without exposing patient data. Creative and interactive Artificial Intelligence companies like Runway and Inworld are also cited, with Runway’s Gen-4.5 video generation model and its GWM-1 general world model both trained on NVIDIA Blackwell GPUs and optimized for that platform.

NVIDIA underlines its breadth by noting that it submitted results across all seven MLPerf Training 5.1 benchmarks and was the only platform to participate in every category, and it says this versatility helps data centers use resources more efficiently. The company adds that Artificial Intelligence labs such as Black Forest Labs, Cohere, Mistral, OpenAI, Reflection and Thinking Machines Lab are training on the NVIDIA Blackwell platform, which is now widely available from major cloud providers and NVIDIA Cloud Partners, including Amazon Web Services, CoreWeave, Google Cloud, Lambda, Microsoft Azure, Nebius, Oracle Cloud Infrastructure and Together AI, as well as in NVIDIA Blackwell Ultra variants rolling out from server makers and cloud services.

Impact Score: 60

NVIDIA launches Nemotron 3 Nano Omni for enterprise agents

NVIDIA has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends NVIDIA’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.

Generative Artificial Intelligence is reshaping cybercrime less than feared

Research into criminal underground forums suggests generative Artificial Intelligence is being used mainly as a productivity tool rather than a transformative criminal breakthrough. The biggest near-term risks may come from automation, fraud support, and attackers adapting content to influence chatbot outputs.
