NVIDIA is presenting a coordinated hardware and software architecture for building physical AI systems that perceive, reason, and act in the real world. The company frames physical AI as distinct from agentic systems that live only in digital environments. To bridge the gap between 2D perception and 3D, real-world understanding, NVIDIA bundles three classes of computers: DGX supercomputers for model training; Omniverse and Cosmos running on RTX PRO servers for simulation and synthetic data generation; and Jetson AGX Thor as the on-robot runtime for inference and control.
The training tier centers on NVIDIA DGX, a supercomputing platform meant for pretraining large foundation models and post-training new robot policies. Developers can start from open models such as Cosmos foundation models or Isaac GR00T and then scale training on DGX infrastructure. The middle tier targets the data and safety problem. Real-world robot data is scarce and expensive, so Omniverse plus Cosmos on RTX PRO servers produces physically based synthetic data, supports digital twins and enables extensive software-in-the-loop testing with Isaac Sim and Isaac Lab. This reduces risk by letting teams iterate at scale in simulated factories, warehouses and outdoor environments.
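The synthetic-data idea described above often takes the form of domain randomization: physics and scene parameters are varied across simulated episodes so a policy does not overfit to any one configuration. The sketch below is a minimal, self-contained illustration of that pattern; the parameter names, ranges, and the toy `run_episode` "simulator" are hypothetical stand-ins, not the Isaac Sim or Isaac Lab API.

```python
import random

def sample_randomized_params(rng):
    """Domain randomization: draw per-episode physics/scene parameters.
    Ranges here are illustrative, not tuned values."""
    return {
        "friction": rng.uniform(0.4, 1.2),
        "payload_kg": rng.uniform(0.0, 5.0),
        "lighting": rng.uniform(0.2, 1.0),
    }

def run_episode(params):
    # Placeholder "simulation": a scalar score that degrades with
    # heavier payloads, standing in for an episode rollout.
    return max(0.0, 1.0 - 0.1 * params["payload_kg"])

def collect_synthetic_batch(n_episodes, seed=0):
    """Gather (parameters, outcome) pairs across randomized episodes,
    the raw material for training or evaluating a policy in sim."""
    rng = random.Random(seed)
    batch = []
    for _ in range(n_episodes):
        params = sample_randomized_params(rng)
        batch.append((params, run_episode(params)))
    return batch

batch = collect_synthetic_batch(100)
```

In a real pipeline the placeholder episode would be a full simulator rollout, and the randomized parameters would cover materials, lighting, sensor noise, and object placement; the structure of the loop stays the same.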
At runtime, NVIDIA positions Jetson AGX Thor as the compact, energy-aware computer that runs multimodal reasoning models on board robots. It is intended to handle vision, language and control policies at millisecond latency so robots can interact safely and responsively with people and physical surroundings. The stack is designed to support a wide range of embodiments, from autonomous mobile robots and manipulator arms to humanoids, which NVIDIA and partners see as a general-purpose form factor for human-centric environments.
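The perceive-reason-act cycle at millisecond latency can be pictured as a fixed-period control loop that tracks deadline overruns. The sketch below is a generic illustration under assumed stub functions (`perceive`, `reason`, `act` are placeholders, and the 10 ms budget is an example figure, not a Jetson AGX Thor specification).

```python
import time

CONTROL_PERIOD_S = 0.010  # example 10 ms budget per perceive-reason-act cycle

def perceive():
    # Stand-in for camera/sensor readout.
    return {"obstacle_dist_m": 1.5}

def reason(obs):
    # Stand-in for an on-board multimodal policy deciding a command.
    return "slow" if obs["obstacle_dist_m"] < 1.0 else "go"

def act(cmd):
    # Stand-in for sending the command to actuators.
    return cmd

def control_loop(n_cycles):
    """Run fixed-period cycles; sleep off slack, count deadline misses."""
    overruns = 0
    commands = []
    for _ in range(n_cycles):
        start = time.monotonic()
        commands.append(act(reason(perceive())))
        elapsed = time.monotonic() - start
        if elapsed > CONTROL_PERIOD_S:
            overruns += 1  # cycle blew its budget; a real system would degrade safely
        else:
            time.sleep(CONTROL_PERIOD_S - elapsed)
    return commands, overruns

cmds, overruns = control_loop(5)
```

Counting overruns rather than silently skipping them matters for the safety framing above: a robot that misses its control deadline should detect that and fall back to a conservative behavior.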
Industry uptake is already visible. Companies such as Universal Robots, RGo Robotics, Boston Dynamics, Fourier, and Galbot use parts of the NVIDIA stack for development, simulation, or deployment. NVIDIA also highlights "Mega", a blueprint for factory digital twins that lets enterprises populate virtual plants with robot brains and test fleet behaviors before real-world rollouts. Together, the three-computer approach aims to accelerate robot foundation models, shrink the sim-to-real gap, and make physical AI safer, faster, and more scalable across manufacturing, logistics, service, and healthcare.