NVIDIA and telecom operators push distributed AI grids

Telecom operators in the U.S. and Asia are turning distributed network infrastructure into artificial intelligence (AI) grids for edge inference and new services. The approach aims to bring compute closer to users, devices and data while improving latency, control and cost efficiency.

Telecommunications networks are emerging as a new layer for distributed AI inference as operators repurpose existing edge infrastructure into interconnected AI grids. Announcements at NVIDIA GTC 2026 showed operators in the U.S. and Asia using their network footprint to deliver and monetize new AI services across the edge. Some are activating existing wired edge sites first, while others are combining edge inference with AI-RAN deployments that integrate AI into the radio access network.

Telcos and distributed cloud providers operate about 100,000 distributed network data centers worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time. The approach turns existing real estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data, where latency and cost per token matter most. The shift positions telecom networks as an active delivery layer for AI rather than merely a transport layer for traffic.

Major operators are taking different approaches. AT&T, which has over 100 million connections across thousands of device types, is working with Cisco and NVIDIA on an AI grid for IoT and mission-critical applications at the network edge. Comcast is building its broadband footprint into an AI grid for conversational agents, interactive media and cloud gaming. Spectrum said its network can support a grid spanning more than 1,000 edge data centers and hundreds of megawatts of capacity less than 10 milliseconds away from 500 million devices, starting with remote GPU rendering for media production. Akamai is expanding Akamai Inference Cloud across more than 4,400 edge locations with thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Indosat Ooredoo Hutchison is linking its sovereign AI factory with distributed edge and AI-RAN sites across Indonesia, while T-Mobile is exploring edge AI applications at distributed network locations using the same RTX PRO 6000 Blackwell Server Edition GPUs.

Application partners are using these grids for low-latency, token-intensive services. Personal AI is using NVIDIA Riva to run conversational agents with sub-500 millisecond end-to-end latency and over 50% lower cost-per-token. Linker Vision is processing thousands of camera feeds across distributed edge sites, enabling up to 10x faster traffic accident detection, 15x faster disaster response and sub-minute alerts for unsafe crowd behavior. Decart is running its Lucy models at the network edge with sub-12-millisecond network latency to support interactive video streams and overlays that adapt in real time to each viewer.
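Latency targets like the sub-500-millisecond figure cited above are usually verified with a simple end-to-end budget check. The sketch below is purely illustrative and assumes nothing about any vendor's tooling; the inference handler is a hypothetical stand-in that simulates edge-side work with a sleep:

```python
import time

LATENCY_BUDGET_MS = 500  # end-to-end target cited for edge conversational agents

def measure_latency_ms(handler, *args):
    """Time one request round-trip through a handler, in milliseconds."""
    start = time.perf_counter()
    handler(*args)
    return (time.perf_counter() - start) * 1000

def fake_edge_inference(prompt):
    # Hypothetical stand-in for a real model call; sleeps 50 ms to
    # simulate inference at a nearby edge site.
    time.sleep(0.05)
    return f"response to: {prompt}"

latency = measure_latency_ms(fake_edge_inference, "hello")
print(f"{latency:.0f} ms, within budget: {latency < LATENCY_BUDGET_MS}")
```

In practice the handler would wrap the full path (network hop, queuing and inference), since the budget applies end to end, not to the model alone.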

NVIDIA is framing the ecosystem through its AI Grid Reference Design, which outlines the hardware, networking and software components needed to deploy and orchestrate distributed AI. Cisco, HPE, Armada, Rafay and Spectro Cloud are among the companies building systems and control-plane software around the design. The goal is to help telecom operators and distributed cloud providers turn the network edge into a unified intelligence layer that can run, scale and monetize AI workloads.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from AI data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation AI and HPC infrastructure.
