NVIDIA unveils advanced artificial intelligence models and tools for autonomous vehicles

NVIDIA launches the Cosmos Predict-2 world model and new developer resources, accelerating autonomous vehicle innovation with state-of-the-art artificial intelligence tools.

NVIDIA has announced the release of Cosmos Predict-2, an upgraded world foundation model designed to advance autonomous vehicle (AV) development through enhanced synthetic data generation and future world state prediction. The model, central to the NVIDIA Cosmos platform, interprets text and visual prompts with greater fidelity, producing more accurate, context-aware video for AV training and validation. Cosmos Predict-2 runs on NVIDIA's GB200 NVL72 systems and DGX Cloud, enabling significantly faster data synthesis to keep pace with the growing complexity of modern end-to-end AV architectures.

Cosmos Predict-2's post-training capabilities let developers fine-tune the model on real-world AV data, producing tailored video outputs that closely mirror physical environments and specific driving scenarios. Unique to Cosmos is its ability to generate multi-camera perspectives from widely available dashcam footage, unlocking expansive new data streams for developers. The NVIDIA Research team has shown that models post-trained on 20,000 hours of driving data can markedly improve AV performance in difficult conditions, such as inclement weather, by producing high-quality, multi-view synthetic videos that can also stand in for real camera inputs when sensors are obstructed or fail.
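The multi-camera workflow described above can be pictured as a prompt-plus-conditioning pipeline: a text description and a single dashcam stream go in, and per-camera synthetic clips come out. The sketch below is purely illustrative structure in Python — `WorldModel`, `generate_views`, the camera names, and the `Prompt` type are hypothetical stand-ins, not the actual Cosmos Predict-2 API.

```python
from dataclasses import dataclass


# Hypothetical stand-in types -- NOT the real Cosmos Predict-2 API.
@dataclass
class Prompt:
    text: str                      # e.g. "heavy rain, night, urban street"
    conditioning_frames: list      # dashcam frames the model conditions on


@dataclass
class WorldModel:
    # Illustrative six-camera AV rig; real camera layouts vary by vehicle.
    cameras: tuple = ("front", "front_left", "front_right",
                      "rear", "rear_left", "rear_right")

    def generate_views(self, prompt: Prompt, num_frames: int) -> dict:
        """Mock: map one dashcam stream to per-camera synthetic clips.

        A real world model would return video tensors; here each clip is
        just a list of frame labels so the data shape is visible.
        """
        return {
            cam: [f"{cam}:{prompt.text}:frame{i}" for i in range(num_frames)]
            for cam in self.cameras
        }


model = WorldModel()
clip = model.generate_views(
    Prompt(text="heavy rain at dusk", conditioning_frames=["dashcam_000.png"]),
    num_frames=4,
)
print(sorted(clip))        # six camera names
print(len(clip["front"]))  # frames per camera
```

The point of the sketch is the fan-out: one conditioning stream plus a scenario prompt yields a synchronized set of views, which is why dashcam footage becomes a multi-camera training source.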

Industry adoption is robust, with leaders like Plus and Oxa leveraging Cosmos models for rapid scenario generation and robust synthetic datasets, expediting commercial AV readiness. NVIDIA also introduced additional tools, including the Cosmos Transfer NIM microservice, which generates photorealistic videos from structured simulations, and the NuRec Fixer model, which repairs gaps in reconstructed AV data. Integration with CARLA, the premier open-source AV simulator, allows over 150,000 developers to harness these models for rendering high-fidelity simulation scenes with adaptable weather, lighting, and terrain, powered by open datasets from NVIDIA. The recent CARLA release incorporates Cosmos Transfer and neural reconstruction interfaces, supporting dynamic model training pipelines that underpinned NVIDIA Research’s repeat victory at the CVPR End-to-End Autonomous Grand Challenge.

To further address AV safety, NVIDIA’s Halos platform brings together holistic safety solutions by combining advanced hardware, software, and artificial intelligence research for autonomous driving. New partners such as Bosch, Easyrain, and Nuro have joined the Halos AI Systems Inspection Lab, where their integration efforts with NVIDIA’s stack are rigorously tested for operational robustness. With expanding developer tools, real-world adoption, and a reinforced focus on safety, NVIDIA continues to accelerate the pace of innovation in autonomous vehicle technologies.

Impact Score: 81

