Nvidia is advancing open-source development for robotics and autonomy with a new suite of physical AI models and frameworks that span the entire lifecycle, from high-fidelity simulation to edge deployment. Built around OpenUSD and Nvidia Omniverse, the stack standardizes 3D data across tools so developers can build accurate digital twins once and reuse them from training through real-world rollout. At CES 2026, partners showed how this physical AI stack is moving out of the lab and into production, powering machines from heavy equipment and surgical robots to humanoids and social companions.
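OpenUSD's role here is concrete: scene data lives in plain-text, composable layers that every tool in the chain can read. A minimal sketch of what that "build once, reuse everywhere" idea looks like in `.usda` form (the prim names and file path are illustrative, not from any Nvidia asset):

```usda
#usda 1.0
(
    defaultPrim = "Robot"
    metersPerUnit = 1.0
)

def Xform "Robot" (
    # Illustrative reference to a shared asset layer; the path is hypothetical.
    prepend references = @./robot_base.usda@
)
{
    def Mesh "Gripper"
    {
        # Geometry and physics schemas attach to the same prim, so one
        # asset can be consumed by a simulator for training and by other
        # OpenUSD tools for visualization without conversion.
    }
}
```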
The stack connects Nvidia Cosmos world models, Isaac technologies such as the new Isaac Lab Arena for policy evaluation, the Nvidia Alpamayo portfolio for autonomous vehicles, and the Nvidia OSMO framework for orchestrating training across heterogeneous compute. Caterpillar's Cat AI Assistant runs Nvidia Nemotron open models for agentic AI on Jetson Thor, bringing natural-language control and safety-parameter adjustment into heavy-vehicle cabs, informed by Omniverse-based digital twins of factories and job sites. Lem Surgical's FDA-cleared Dynamis Robotic Surgical System relies on Jetson AGX Thor, Nvidia Holoscan and Isaac for Healthcare, plus Nvidia Cosmos Transfer and Isaac Sim digital twins, to train dual-arm humanoid surgical robots that mimic human dexterity in complex spinal procedures.
Neura Robotics is training its 4NE1 humanoid and MiPA service robots with Isaac Sim, Isaac Lab and Isaac GR00T Mimic on OpenUSD digital twins, and is working with SAP and Nvidia, using a Mega Omniverse Blueprint, to validate Joule-powered cognitive behaviors before deploying them to its Neuraverse fleets. AgiBot builds its Genie Envisioner platform on Nvidia Cosmos Predict 2, Isaac Sim and Isaac Lab so that action-conditioned synthetic-video policies transfer more reliably to Genie2 humanoids and Jetson Thor tabletop robots, while Intbot uses Nvidia Cosmos Reason 2 to give social robots reasoning vision-language models that discern social cues and safety context. Nvidia has also introduced Agile, an Isaac Lab-based loco-manipulation engine that packages a sim-to-real verified workflow: developers can use built-in task configurations, Markov decision process models, training utilities and deterministic evaluation to train reinforcement-learning policies and port whole-body behaviors to platforms such as Unitree G1 and LimX Dynamics TRON.
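Isaac Lab's actual API aside, the core idea behind deterministic evaluation of a policy over a Markov decision process can be sketched in a few lines of plain Python. Everything below (the toy corridor task, rewards and the greedy policy) is an illustrative stand-in, not Agile's workflow; the point is only that fixing the environment and the seed makes a policy's score exactly repeatable between runs.

```python
import random

# Toy 1-D corridor MDP: states 0..4, start at state 0, goal at state 4.
# Transitions are deterministic; evaluation seeds its RNG so even a
# stochastic policy would score identically on repeated runs -- the
# property "deterministic evaluation" refers to in a sim-to-real workflow.
N_STATES = 5
GOAL = 4

def step(state: int, action: int) -> tuple[int, float]:
    """Move left (-1) or right (+1); small step cost, bonus at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 10.0 if next_state == GOAL else -1.0
    return next_state, reward

def evaluate(policy, episodes: int = 3, horizon: int = 10, seed: int = 0) -> float:
    """Average episodic return under a fixed seed and fixed horizon."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        state = 0
        for _ in range(horizon):
            action = policy(state, rng)
            state, reward = step(state, action)
            total += reward
            if state == GOAL:
                break
    return total / episodes

# A trivial "trained" policy: always move toward the goal.
greedy = lambda state, rng: 1

print(evaluate(greedy))  # 7.0: three -1.0 steps, then +10.0 at the goal
```

Because both the transition function and the evaluation seed are pinned, any two runs of `evaluate(greedy)` return the same number, which is what lets a training pipeline compare policies meaningfully before porting them to hardware.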
The company is deepening its ties to open-source ecosystems by integrating Isaac GR00T N models and its simulation frameworks into Hugging Face's LeRobot, giving developers direct access to Isaac GR00T N1.6 and Isaac Lab Arena for streamlined policy training and evaluation. Hugging Face's Reachy 2 humanoid is now interoperable with Nvidia Jetson Thor, so developers can deploy advanced vision-language-action models to physical robots. Robotis has built an open-source sim-to-real pipeline with Isaac Sim, GR00T Mimic and a vision-language-action-based Isaac GR00T N model that deploys straight to its hardware, a template for accelerating the move from synthetic data and digital twins to robust real-world robotic tasks. Nvidia is pointing developers to technical blogs, tutorials, learning paths and the Cosmos Cookoff challenge to deepen skills and encourage broader experimentation with its open physical AI toolchains.
