AMD launches Pensando Pollara 400 AI NIC-ready server platforms

AMD has introduced the Pensando Pollara 400 AI NIC-Ready Server Platforms, a partner-driven ecosystem of Ethernet-based systems designed to accelerate the deployment of scalable artificial intelligence clusters.

AMD has announced the Pensando Pollara 400 AI NIC-Ready Server Platforms, a portfolio of partner server systems that ship preconfigured with the AMD Pensando Pollara 400 AI NIC. The goal is to give enterprises, cloud providers, and research organizations an Ethernet-based artificial intelligence networking stack that works out of the box for both front-end and back-end workloads. By combining established server designs, AMD compute, and the Pollara 400 card's fully programmable 400G Ethernet, AMD is positioning these platforms as a faster path to standing up scalable artificial intelligence clusters.

The platforms integrate servers from vendors such as Celestica, Cisco, Compal, Dell, Gigabyte, HPE, Ingrasys (Foxconn), Mitac, QCT, Supermicro, and Wistron, each contributing strengths in system design, integration, and support. Configurations span dense GPU training nodes and high-throughput inference servers, often combining AMD Epyc server CPUs, AMD Instinct GPU accelerators, and Ethernet fabrics built on the AMD Pensando Pollara 400 AI NIC. Networking partners provide Ultra Ethernet-ready or RoCE-based fabrics, while software and orchestration partners focus on making these systems operable at scale. AMD emphasizes that, unlike other AI NICs, the Pollara 400 is fully hardware- and software-programmable, so transport and congestion-control algorithms can be updated without replacing hardware, allowing tuning over time for new artificial intelligence workloads and shifting business priorities.
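
To make that programmability claim concrete, here is a minimal, hypothetical sketch in plain Python (not AMD's P4 toolchain or any Pollara SDK; all names below are the author's inventions) of what a swappable congestion-control policy looks like: the algorithm sits behind a stable interface, so changing its behavior is a software update rather than a hardware swap.

```python
# Illustrative only: a toy, swappable congestion-control policy.
# None of these names come from AMD; the Pollara 400 is programmed
# via P4 and vendor tooling, not a Python interface like this one.

class CongestionPolicy:
    """Stable interface the 'pipeline' calls; implementations can be swapped."""
    def on_ack(self, rtt_us: float, ecn_marked: bool) -> None:
        raise NotImplementedError
    def allowed_rate_gbps(self) -> float:
        raise NotImplementedError

class EcnHalving(CongestionPolicy):
    """Toy DCQCN-flavored policy: halve on congestion marks, recover additively."""
    def __init__(self, line_rate_gbps: float = 400.0):
        self.line_rate = line_rate_gbps
        self.rate = line_rate_gbps

    def on_ack(self, rtt_us: float, ecn_marked: bool) -> None:
        if ecn_marked:
            self.rate /= 2                                    # back off under congestion
        else:
            self.rate = min(self.line_rate, self.rate + 1.0)  # gentle recovery

    def allowed_rate_gbps(self) -> float:
        return self.rate

# Updating the algorithm is a software/config change, not a NIC replacement:
policy: CongestionPolicy = EcnHalving()
policy.on_ack(rtt_us=12.0, ecn_marked=True)
print(policy.allowed_rate_gbps())  # 200.0 after one ECN-marked ACK
```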

Within each platform, the AMD Pensando Pollara 400 AI NIC is designed to deliver the networking intelligence that artificial intelligence jobs require. Its P4-programmable pipeline supports Ultra Ethernet Consortium features including intelligent packet spray, out-of-order packet handling with in-order message delivery, selective retransmission, and path-aware congestion control, all aimed at reducing artificial intelligence job runtimes, improving effective throughput for collective operations, and boosting network reliability through faster fault detection and recovery. Cisco highlights its collaboration with AMD as a way to combine Cisco Intelligent Packet Flow with Pollara 400 AI NICs for intelligent load balancing and path-aware congestion control across front-end and back-end environments, while Dell points to integration with Dell PowerSwitch running SONiC to deliver a high-performance, programmable Ethernet solution that adapts to evolving standards. Because the Pollara 400 AI NIC targets open, standards-based Ethernet, including OCP 3.0 form factors and interoperability with a wide range of switches and optics, AMD argues that customers can scale artificial intelligence infrastructure while preserving choice, with the NIC's programmability offering a path to future transport protocols and optimizations as industry standards advance.
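
Those Ultra Ethernet Consortium features fit together in a specific way: the packets of one message are sprayed across multiple paths, arrive out of order, and only the ones actually lost are resent before the message is delivered in order. The toy simulation below is the author's illustration of that interplay; the sequence numbering and loss handling are simplified assumptions, not Pollara internals.

```python
import random

random.seed(7)  # deterministic toy run

def spray(message_packets, num_paths=4, loss_rate=0.1):
    """Spread one message's packets across paths; paths may drop packets."""
    arrived = {}
    for seq, payload in enumerate(message_packets):
        path = seq % num_paths              # round-robin "packet spray"
        if random.random() >= loss_rate:    # packet survived this path
            arrived[seq] = (path, payload)
    return arrived

def deliver(message_packets, arrived):
    """Accept out-of-order arrivals, NACK only the gaps, deliver in order."""
    missing = [s for s in range(len(message_packets)) if s not in arrived]
    # Selective retransmission: resend only the lost sequence numbers,
    # not everything after the first gap (as go-back-N would).
    for seq in missing:
        arrived[seq] = (0, message_packets[seq])
    # In-order message delivery: reassemble strictly by sequence number.
    return b"".join(arrived[s][1] for s in sorted(arrived)), missing

packets = [bytes([i]) * 64 for i in range(16)]  # one 1 KiB message, 16 packets
data, resent = deliver(packets, spray(packets))
assert data == b"".join(packets)
print(f"resent {len(resent)} of {len(packets)} packets")
```

The key property is that a single slow or lossy path delays only its own packets, not the whole message stream, which is why these features are aimed at collective operations whose runtime is set by the slowest participant.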

Impact Score: 55

Treasury outlines artificial intelligence resources for financial sector oversight

The U.S. Department of the Treasury has created a dedicated hub detailing how it uses artificial intelligence in the financial sector, aggregating policy documents, internal strategies, and alerts on emerging risks such as deepfake-enabled fraud. The page is positioned as a central repository for Treasury-developed reports, use cases, and guidance shaping the government’s approach to artificial intelligence in financial services.

Why tracking artificial intelligence assistant traffic is critical for small businesses

Small businesses are facing a shift as customers increasingly rely on artificial intelligence assistants for local recommendations and research, making visibility in these tools as important as traditional search. The article explains why tracking artificial intelligence assistant traffic as its own channel and investing in search optimization can significantly improve a company's reach and accuracy.
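
As a starting point, a site can bucket sessions into an assistant channel by referrer hostname. The sketch below is a hedged illustration: the domain list is an assumption to verify against your own analytics logs, not an authoritative registry.

```python
from urllib.parse import urlparse

# Assumed referrer hostnames for popular assistants -- verify against real logs.
AI_ASSISTANT_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def channel_for(referrer_url: str) -> str:
    """Bucket a referrer URL into an analytics channel."""
    host = (urlparse(referrer_url).hostname or "").removeprefix("www.")
    return "ai_assistant" if host in AI_ASSISTANT_DOMAINS else "other"

print(channel_for("https://chatgpt.com/"))           # ai_assistant
print(channel_for("https://www.google.com/search"))  # other
```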

Configuring language models in opencode

Opencode uses the AI SDK and Models.dev to connect to more than 75 large language model providers, with support for both cloud and local models. Users can choose recommended models, set defaults, configure options, and define variants through a central config file.
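
For a sense of what that central config looks like, here is a minimal sketch in the shape of an opencode.json file; treat the model identifier as a placeholder and confirm key names against the opencode documentation, since JSON allows no inline comments for hedging.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514"
}
```

The provider/model format mirrors how opencode namespaces the 75-plus providers it reaches through Models.dev.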
