Nvidia is using its existing DGX SuperPOD architecture to prepare for large-scale deployments of systems built on the newly introduced Rubin platform, which the company describes as the next leap forward in artificial intelligence computing. Unveiled at the CES trade show in Las Vegas, the Rubin platform is presented as the basis for an artificial intelligence supercomputer aimed at demanding workloads such as agentic artificial intelligence, mixture-of-experts models and long-context reasoning.
The Rubin platform consists of six new chips designed to work together as a single cohesive system: the Nvidia Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet Switch, tied together through what Nvidia calls an advanced co-design approach. According to the company, this integration is engineered to accelerate training and to reduce the cost of inference token generation by tightly aligning compute, networking and data processing components.
Within this strategy, DGX SuperPOD remains the foundational design for deploying Rubin-based systems across both enterprise and research environments, acting as a blueprint for end-to-end infrastructure. Nvidia positions the DGX platform as addressing the entire technology stack, from Nvidia computing to networking to software, so that customers can adopt it as a single system rather than assembling and integrating disparate parts. Jensen Huang, founder and CEO of Nvidia, said that Rubin arrives at exactly the right moment, with artificial intelligence computing demand for both training and inference described as going through the roof. He frames DGX SuperPOD and Rubin together as a way for organizations to focus on artificial intelligence innovation and business results instead of infrastructure complexity.
