NVIDIA links data centers into a unified Artificial Intelligence supercomputer with Spectrum-XGS Ethernet

NVIDIA unveiled Spectrum-XGS Ethernet to interconnect multiple geographically separated data centers into a single giga-scale Artificial Intelligence super-factory. The platform promises distance-aware networking that delivers predictable low-latency performance across campuses, cities, and continents.

Data center networking is central to distributed computing and to future Artificial Intelligence workloads that may span millions of GPUs. NVIDIA introduced Spectrum-XGS Ethernet as an extension of its Spectrum-X networking platform, designed to link multiple geographically separated data centers into a unified, giga-scale Artificial Intelligence super-factory. The company said Spectrum-XGS removes the capacity limits of single facilities by adding distance-aware networking, which aims to provide predictable, low-latency performance across campuses, cities, and continents.

The changes are delivered primarily through software and firmware updates to existing Spectrum-X switches and ConnectX SuperNICs rather than through new silicon. Spectrum-XGS includes auto-adjusted congestion control tuned for long-haul links, precise latency management to reduce jitter, and comprehensive end-to-end telemetry. That telemetry is intended to allow operators to visualize and control network traffic across multiple sites, giving visibility into cross-facility flows and making behavior across long distances more predictable for distributed workloads.

NVIDIA reported measurable performance improvements from the updates, saying Spectrum-XGS nearly doubles NCCL throughput for multi-GPU, multi-node training jobs and large-scale experiments. Those gains are presented as efficiency improvements for distributed Artificial Intelligence training and inference. NVIDIA positioned the technology as a new axis of growth for infrastructure, following scale-up inside servers and scale-out inside data centers with a new scale-across approach that connects facilities into unified compute fabrics as demand for massive distributed compute grows.
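The NCCL throughput figures above refer to collective operations such as allreduce, which NCCL implements with a bandwidth-optimal ring algorithm: each rank's buffer is split into chunks that circulate around the ring, first being reduced, then gathered. The following is a minimal pure-Python simulation of that ring algorithm for intuition only, not NCCL itself; the function name and list-of-lists representation (one list per simulated GPU) are illustrative assumptions.

```python
def ring_allreduce(bufs):
    """Simulate sum-allreduce over `bufs`, one equal-length list per rank.

    Each rank splits its buffer into n chunks. Phase 1 (reduce-scatter):
    chunks circulate the ring for n-1 steps, accumulating partial sums,
    so rank r ends up owning the fully reduced chunk (r + 1) % n.
    Phase 2 (all-gather): the reduced chunks circulate for n-1 more steps
    until every rank holds the complete elementwise sum.
    """
    n = len(bufs)
    size = len(bufs[0])
    assert size % n == 0, "buffer length must divide evenly into n chunks"
    c = size // n

    def chunk_slice(idx):
        return slice(idx * c, (idx + 1) * c)

    # Phase 1: reduce-scatter. Snapshot all "sends" before applying them,
    # since in the real algorithm every rank transmits simultaneously.
    for step in range(n - 1):
        sends = []
        for r in range(n):
            idx = (r - step) % n
            sends.append(((r + 1) % n, idx, list(bufs[r][chunk_slice(idx)])))
        for dst, idx, data in sends:
            for k, v in enumerate(data):
                bufs[dst][idx * c + k] += v

    # Phase 2: all-gather. Circulate the reduced chunks; receivers overwrite.
    for step in range(n - 1):
        sends = []
        for r in range(n):
            idx = (r + 1 - step) % n
            sends.append(((r + 1) % n, idx, list(bufs[r][chunk_slice(idx)])))
        for dst, idx, data in sends:
            bufs[dst][chunk_slice(idx)] = data

    return bufs
```

With n ranks and buffer size S, each rank transfers roughly 2S(n-1)/n bytes in total, which is why ring allreduce saturates link bandwidth regardless of rank count; long-haul, cross-facility links add latency per step, which is the regime Spectrum-XGS's congestion control targets.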

Impact Score: 72

Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal, and multilingual Artificial Intelligence models that includes three Ministral edge models and the sparse mixture-of-experts Mistral Large 3, with 41B active parameters out of 675B total, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments, available starting Tuesday, Dec. 2.
