Intel holds off on joining Artificial Intelligence RAN Alliance at MWC 2026

Intel is staying out of the Artificial Intelligence RAN Alliance for now, arguing that existing standards bodies and its Xeon CPUs already enable Artificial Intelligence in the radio access network. The stance underscores a broader industry debate over whether future RAN infrastructure really needs GPUs at the cell site.

SoftBank and Nvidia are leading the Artificial Intelligence RAN Alliance, which launched in 2024 with founding members including Amazon Web Services, Arm, DeepSig, Ericsson, Microsoft, Nokia, Northeastern University, Samsung Research and T-Mobile. The alliance has grown to 132 members, but Intel has chosen not to join at this stage, saying the decision is driven by practical considerations rather than opposition to Artificial Intelligence in the radio access network. Intel executives stress that Artificial Intelligence can already be deployed in live networks using current architectures and that the company is continuing to evaluate how new groups fit alongside existing work.

Intel points to established standards bodies such as 3GPP, the O-RAN Alliance, the Telecom Infrastructure Project and 6GIC as providing strong governance for network Artificial Intelligence and as mechanisms to avoid fragmentation. The company says it remains active across these forums and is assessing how the Artificial Intelligence RAN Alliance aligns with ongoing initiatives and where participation would make sense for the wider industry. Intel’s strategy centers on server-based RAN deployments powered by its latest Xeon processors, which it argues can deliver tangible Artificial Intelligence benefits without requiring additional specialized hardware at the cell site.

The Xeon 6 system-on-chip, which launched last year, includes built-in Artificial Intelligence acceleration through advanced matrix extensions and raises core counts to as many as 72, up from 42 in an earlier release. Intel says that with 72 cores, “we can reduce the number of servers so we go from more than one server per site to the possibility to have one server per site,” which it describes as a major improvement in total cost of ownership, complexity and power consumption. This approach contrasts with Nvidia’s push for GPUs in RAN infrastructure, although Intel avoids framing the issue as a direct GPU versus CPU battle and acknowledges that GPUs play a very important role in Artificial Intelligence.
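Intel’s server-based argument rests on the advanced matrix extensions (AMX) built into recent Xeon parts. As a minimal sketch of how an operator might check for that capability, the snippet below parses CPU feature flags; it assumes a Linux host, where the kernel reports AMX support as the `amx_tile`, `amx_bf16` and `amx_int8` flags in `/proc/cpuinfo`. The helper name `amx_features` is illustrative, not from any Intel tool.

```python
# Hedged sketch: detect Intel AMX ("advanced matrix extensions")
# support from Linux /proc/cpuinfo feature flags. Flag names
# amx_tile / amx_bf16 / amx_int8 are the ones the Linux kernel
# exposes for AMX-capable CPUs; the helper itself is hypothetical.

AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}

def amx_features(cpuinfo_text: str) -> set:
    """Return the AMX feature flags present in /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        # The "flags" line lists every CPU feature, space-separated.
        if line.startswith("flags"):
            return AMX_FLAGS & set(line.split(":", 1)[1].split())
    return set()
```

On a Linux machine this could be driven with `amx_features(open("/proc/cpuinfo").read())`; an empty result means the CPU (or kernel) does not advertise AMX.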

Industry analysts note that Artificial Intelligence has been used for years to optimize and improve the RAN, often through traditional machine learning, but questions remain over running non-RAN workloads on RAN infrastructure and where to place those workloads between the far edge and near edge. Proofs of concept have shown GPUs acting as a computing platform in place of systems-on-chip that combine specialized ASICs with multi-core CPUs, yet this remains a work in progress. Major vendors including Ericsson, Nokia, Huawei, Samsung, Qualcomm, Marvell, Intel, Nvidia and Arm hold differing views on the viability of GPUs in base stations, which hinge on finding monetizable non-RAN workloads, orchestrating them alongside a reliable RAN and achieving cost and energy efficiency. Intel maintains it will continue to engage across multiple forums such as the Telecom Infrastructure Project and 3GPP as Artificial Intelligence becomes pervasive throughout the telecom ecosystem.


HIVE launches Paraguay cloud cluster for Columbia University research

HIVE Digital Technologies has activated its BUZZ Artificial Intelligence Cloud platform in Asunción, Paraguay, with Columbia University researchers using the system for large language model training. The deployment is positioned as a proof of concept for scaling high-performance computing capacity in Paraguay.

Case for an anonymized Artificial Intelligence proxy

A proxy layer that anonymizes requests before they reach large language model providers is emerging as a possible foundation for privacy-focused Artificial Intelligence infrastructure. The approach aims to reduce data exposure while improving control, policy enforcement, and flexibility across providers.

Microsoft outlines next-gen DirectX ray tracing features

Microsoft has published a second DirectX Ray Tracing functional specification describing how its ray tracing pipeline is evolving. The update highlights clustered geometry, partitioned top-level acceleration structures, and GPU-driven acceleration structure operations aimed at improving efficiency in games.

Quantum Machines launches open acceleration stack

Quantum Machines has introduced the Open Acceleration Stack to let users integrate any classical processor into a quantum control stack. The framework extends the company’s orchestration platform with low-latency links between its control hardware and accelerators from the NVIDIA and AMD ecosystems.

Reco adds security controls for Artificial Intelligence agents

Reco has introduced a new capability aimed at giving security teams visibility into Artificial Intelligence agents and automation tools operating across SaaS environments. The move targets growing concerns over unmanaged agent activity, sensitive data access, and actions taken without direct human oversight.
