AMD has announced the Pensando Pollara 400 AI NIC-Ready Server Platforms, a portfolio of partner server systems that ship preconfigured with the AMD Pensando Pollara 400 AI Network Interface Card. The goal is to give enterprises, cloud providers, and research organizations an Ethernet-based AI networking stack that works out of the box for both front-end and back-end workloads. By combining established server designs, AMD compute, and the Pollara 400's fully programmable 400G Ethernet, AMD is positioning these platforms as a faster path to standing up scalable AI clusters.
The platforms integrate servers from vendors such as Celestica, Cisco, Compal, Dell, Gigabyte, HPE, Ingrasys (Foxconn), Mitac, QCT, Supermicro, and Wistron, each contributing its strengths in system design, integration, and support. Configurations span dense GPU training nodes and high-throughput inference servers, often pairing AMD EPYC CPUs and AMD Instinct GPU accelerators with Pollara 400-based Ethernet fabrics. Networking partners supply Ultra Ethernet-ready or RoCE-based fabrics, while software and orchestration partners focus on making these systems operable at scale. AMD emphasizes that, unlike other AI NICs, the Pollara 400 is fully programmable in both hardware and software, so transport and congestion-control algorithms can be updated without replacing hardware and tuned over time for new AI workloads and shifting business priorities.
Within each platform, the AMD Pensando Pollara 400 AI NIC is designed to deliver the networking intelligence that AI jobs require. Its P4-programmable pipeline supports Ultra Ethernet Consortium features including intelligent packet spray, out-of-order packet handling with in-order message delivery, selective retransmission, and path-aware congestion control, all aimed at reducing AI job runtimes, improving effective throughput for collective operations, and boosting network reliability through faster fault detection and recovery. Cisco highlights its collaboration with AMD as a way to combine Cisco Intelligent Packet Flow with Pollara 400 AI NICs for intelligent load balancing and path-aware congestion control across front-end and back-end environments, while Dell points to integration with Dell PowerSwitch running SONiC to deliver a high-performance, programmable Ethernet solution that adapts to evolving standards. Because the Pollara 400 targets open, standards-based Ethernet, including OCP 3.0 form factors and interoperability with a wide range of switches and optics, AMD argues that customers can scale AI infrastructure while preserving vendor choice, with the NIC's programmability offering a path to future transport protocols and optimizations as industry standards advance.
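To make the packet-spray idea concrete: instead of hashing an entire flow onto a single network path (classic ECMP), each packet of a message can take a different path, with the receiver tolerating out-of-order arrival and still handing the application a complete, in-order message. The toy Python sketch below models only that concept; it is not AMD's implementation or the Ultra Ethernet wire protocol, and all function and field names here are invented for illustration.

```python
import random

def spray(message_bytes, mtu, num_paths):
    """Split a message into sequenced packets and assign each packet to a
    path round-robin ("packet spray"), rather than pinning the whole flow
    to one path as classic ECMP flow hashing would."""
    num_packets = (len(message_bytes) + mtu - 1) // mtu
    return [
        {"seq": i, "path": i % num_paths, "data": message_bytes[i * mtu:(i + 1) * mtu]}
        for i in range(num_packets)
    ]

def deliver(packets, seed=0):
    """Model out-of-order arrival (each path has different latency and
    queueing) plus receiver-side reassembly: packets are accepted in any
    order, but the message is only released to the application complete
    and in sequence ("in-order message delivery")."""
    rng = random.Random(seed)
    arrivals = sorted(packets, key=lambda p: rng.random())  # scrambled on the wire
    buffer = {p["seq"]: p["data"] for p in arrivals}        # accept out of order
    return b"".join(buffer[s] for s in sorted(buffer))      # reassemble by seq

msg = bytes(range(20))
pkts = spray(msg, mtu=4, num_paths=4)
assert deliver(pkts) == msg  # message intact despite out-of-order arrival
```

The design point the sketch illustrates is that tracking per-packet sequence numbers at the receiver removes the need to keep a flow on one path, so all paths of the fabric can be loaded evenly; selective retransmission then only resends the specific sequence numbers that are missing rather than everything after a loss.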
