NVIDIA and telecom operators push distributed AI grids

Telecom operators in the U.S. and Asia are turning distributed network infrastructure into artificial intelligence (AI) grids for edge inference and new services. The model brings compute closer to users, devices and data while improving latency, control and cost efficiency.

Telecommunications networks are emerging as a new layer for distributed AI inference as operators repurpose existing edge infrastructure into interconnected AI grids. Announcements at NVIDIA GTC 2026 showed operators in the U.S. and Asia using their network footprint to deliver and monetize new AI services at the edge. Some are activating existing wired edge sites first, while others are combining edge inference with AI-RAN deployments that integrate AI into the radio access network.

Telcos and distributed cloud providers operate about 100,000 distributed network data centers worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time. The approach turns existing real estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data, where responsiveness and cost per token matter most. The shift positions telecom networks as an active delivery layer for AI rather than merely a transport layer for traffic.

Major operators are taking different approaches. AT&T, which has more than 100 million connections across thousands of device types, is working with Cisco and NVIDIA on an AI grid for IoT and mission-critical applications at the network edge. Comcast is building its broadband footprint into an AI grid for conversational agents, interactive media and cloud gaming. Spectrum said its network can support a grid spanning more than 1,000 edge data centers and hundreds of megawatts of capacity, less than 10 milliseconds away from 500 million devices, starting with remote GPU rendering for media production. Akamai is expanding Akamai Inference Cloud across more than 4,400 edge locations with thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Indosat Ooredoo Hutchison is linking its sovereign AI factory with distributed edge and AI-RAN sites across Indonesia, while T-Mobile is exploring edge AI applications at distributed network locations, also using NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

Application partners are using these grids for low-latency, token-intensive services. Personal AI is using NVIDIA Riva to run conversational agents with sub-500-millisecond end-to-end latency and more than 50% lower cost per token. Linker Vision is processing thousands of camera feeds across distributed edge sites, enabling up to 10x faster traffic-accident detection, 15x faster disaster response and sub-minute alerts for unsafe crowd behavior. Decart is running its Lucy models at the network edge with sub-12-millisecond network latency to support interactive video streams and overlays that adapt in real time to each viewer.

NVIDIA is framing the ecosystem through its AI Grid Reference Design, which outlines the hardware, networking and software components needed to deploy and orchestrate distributed AI. Cisco, HPE, Armada, Rafay and Spectro Cloud are among the companies building systems and control-plane software around the design. The goal is to help telecom operators and distributed cloud providers turn the network edge into a unified intelligence layer that can run, scale and monetize AI workloads.


NVIDIA expands local agent computing with RTX PCs and DGX Spark

NVIDIA used GTC to highlight new open models, local agent software and fine-tuning tools aimed at running agentic AI workloads on RTX PCs and DGX Spark. The announcements focus on privacy, lower operating costs and better local performance for personal assistants and creative applications.

NVIDIA RTX systems connect to Apple Vision Pro

NVIDIA and Apple are bringing native integration of NVIDIA CloudXR 6.0 to visionOS. The move enables secure streaming of NVIDIA RTX-powered simulators and professional 3D graphics applications to Apple Vision Pro.

OpenAI’s Pentagon access and xAI’s Grok lawsuit lead the day

OpenAI’s decision to give the Pentagon access to its AI models is raising questions about how quickly generative systems could move into military operations. Meanwhile, xAI is facing a lawsuit alleging Grok enabled the creation of child sexual abuse material.

NVIDIA details DLSS 5 image quality goals

NVIDIA says DLSS 5 is designed to deliver real-time neural rendering while preserving the visual direction developers intended for each frame. The technology combines lighting, material, and temporal improvements to keep enhanced images consistent with game content.
