Telecommunications networks are emerging as a new layer for distributed artificial intelligence (AI) inference as operators repurpose existing edge infrastructure into interconnected AI grids. Announcements at NVIDIA GTC 2026 showed operators in the U.S. and Asia using their network footprints to deliver and monetize new AI services across the edge. Some are activating existing wired edge sites first, while others are pairing edge inference with AI-RAN deployments that integrate AI into the radio access network.
Telcos and distributed cloud providers operate about 100,000 distributed network data centers worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time. The approach turns existing real estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data, where latency and cost per token are most favorable. The shift positions telecom networks as an active delivery layer for AI rather than only a transport layer for traffic.
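The core trade-off described above, placing each inference request at whichever edge site best balances latency against cost per token, can be sketched as a simple placement policy. This is an illustrative sketch only; the site names, the `EdgeSite` fields and the `pick_site` helper are assumptions for the example, not part of any operator's or NVIDIA's actual scheduler.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    rtt_ms: float         # estimated network round-trip time to the user
    cost_per_mtok: float  # hypothetical cost per million tokens at this site
    has_capacity: bool    # whether the site has spare GPU capacity right now

def pick_site(sites, latency_budget_ms):
    """Among sites that meet the latency budget and have spare capacity,
    choose the cheapest per token; otherwise fall back to the
    lowest-latency site that still has capacity."""
    eligible = [s for s in sites
                if s.has_capacity and s.rtt_ms <= latency_budget_ms]
    if eligible:
        return min(eligible, key=lambda s: s.cost_per_mtok)
    in_capacity = [s for s in sites if s.has_capacity]
    return min(in_capacity, key=lambda s: s.rtt_ms) if in_capacity else None

# Hypothetical sites: a nearby central office, a farther regional hub,
# and a metro point of presence in between.
sites = [
    EdgeSite("central-office-a", rtt_ms=4.0,  cost_per_mtok=0.90, has_capacity=True),
    EdgeSite("regional-hub-b",   rtt_ms=12.0, cost_per_mtok=0.40, has_capacity=True),
    EdgeSite("metro-pop-c",      rtt_ms=7.5,  cost_per_mtok=0.55, has_capacity=True),
]

# With a 10 ms budget, the cheap regional hub is too far, so the policy
# picks the cheaper of the two in-budget sites.
print(pick_site(sites, latency_budget_ms=10.0).name)  # metro-pop-c
```

The fallback matters in practice: an interactive workload with a tight budget may have no in-budget site free, and degrading to the nearest available site is usually preferable to rejecting the request.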
Major operators are taking different approaches. AT&T, which has over 100 million connections across thousands of device types, is working with Cisco and NVIDIA on an AI grid for IoT and mission-critical applications at the network edge. Comcast is building its broadband footprint into an AI grid for conversational agents, interactive media and cloud gaming. Spectrum said its network can support a grid spanning more than 1,000 edge data centers and hundreds of megawatts of capacity within 10 milliseconds of 500 million devices, starting with remote GPU rendering for media production. Akamai is expanding Akamai Inference Cloud across more than 4,400 edge locations with thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Indosat Ooredoo Hutchison is linking its sovereign AI factory with distributed edge and AI-RAN sites across Indonesia, while T-Mobile is exploring edge AI applications on distributed network locations using NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
Application partners are using these grids for low-latency, token-intensive services. Personal AI is using NVIDIA Riva to run conversational agents with sub-500-millisecond end-to-end latency and over 50% lower cost per token. Linker Vision is processing thousands of camera feeds across distributed edge sites, enabling up to 10x faster traffic accident detection, 15x faster disaster response and sub-minute alerts for unsafe crowd behavior. Decart is running its Lucy models at the network edge with sub-12-millisecond network latency to support interactive video streams and overlays that adapt in real time to each viewer.
NVIDIA is framing the ecosystem through its AI Grid Reference Design, which outlines the hardware, networking and software components needed to deploy and orchestrate distributed AI. Cisco, HPE, Armada, Rafay and Spectro Cloud are among the companies building systems and control-plane software around the design. The goal is to help telecom operators and distributed cloud providers turn the network edge into a unified intelligence layer that can run, scale and monetize AI workloads.
