Networking for Artificial Intelligence: building the foundation for real-time intelligence

At the 2025 Ryder Cup, HPE deployed an on-site private-cloud network and operational dashboard to support real-time decisions for nearly a quarter million attendees. The deployment highlights how networking must evolve to support Artificial Intelligence inference, edge processing, and self-driving network operations.

The 2025 Ryder Cup at Bethpage Black in Farmingdale, New York, offered a high-profile test of networking built for Artificial Intelligence. Nearly a quarter million spectators and a flood of connected devices prompted HPE to build a central operations hub, the Connected Intelligence Center. The center ingested ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator queues, and network performance feeds, and it combined inputs from 67 Artificial Intelligence-enabled cameras into a private-cloud dashboard that gave staff an instantaneous operational view. Jon Green, CTO of HPE Networking, framed the deployment as proof that “disconnected Artificial Intelligence doesn’t get you very much; you need a way to get data into it and out of it for both training and inference.”

Engineers addressed the venue’s density and mobility challenges with a two-tiered architecture. A front-end layer of more than 650 Wi-Fi 6E access points, 170 network switches, and 25 user experience sensors collected live video and movement data, while a back-end layer in a temporary on-site data center linked GPUs and servers in a high-speed, low-latency configuration that served as the system’s brain. That back end fed a private-cloud Artificial Intelligence cluster for live analytics, allowing models to process footage and surface the most interesting shots. The article emphasizes that networks for Artificial Intelligence must deliver ultra-low latency, lossless throughput, and adaptability at scale, because inference workloads are gated by the slowest calculation in the pipeline.

The piece also situates the Ryder Cup example within broader industry trends. An HPE cross-industry survey of 1,775 IT leaders found that 45 percent can run real-time data pushes and pulls today, up from 7 percent in 2024, yet many organizations still struggle to operationalize data pipelines. The rise of physical Artificial Intelligence is prompting workloads to be repatriated to edge and on-premises clusters for faster, safer inference in contexts like self-driving vehicles and factory floors. HPE’s telemetry practice, which processes more than a trillion telemetry points daily, feeds AIOps models that already surface recommendations and may someday enable self-driving networks that automate routine fixes and mass configuration changes. The article concludes that network performance increasingly defines business performance, and that building inference-ready networks will separate pilots from scaled Artificial Intelligence deployments.

Impact Score: 58

The new dictionary of Artificial Intelligence reliability

As organizations move models from experimentation to production, the question shifts from “can we build it?” to “can we trust it?” This field guide defines the terms that shape Artificial Intelligence reliability across performance, data quality, system reliability, explainability, operations, and governance.
