AMD introduces Pensando Pollara 400 Artificial Intelligence NIC-ready server platforms

AMD has introduced AMD Pensando Pollara 400 Artificial Intelligence NIC-Ready Server Platforms, a growing ecosystem of server systems from leading partners that ship preconfigured with the AMD Pensando Pollara 400 Artificial Intelligence Network Interface Card. The packages are designed to deliver high-performance, Ethernet-based Artificial Intelligence networking out of the box for both front-end and back-end use cases. By pairing proven server designs with AMD compute and the Pollara 400’s fully programmable 400G Ethernet, AMD says customers can accelerate deployment and reduce integration risk when standing up scalable Artificial Intelligence clusters.

The new platforms establish a consistent networking foundation across a broad partner ecosystem. Systems can be offered as dense GPU training nodes or high-throughput inference servers and typically combine AMD EPYC server CPUs, AMD Instinct GPU accelerators, and AMD Pensando Pollara 400 Artificial Intelligence NIC-based Ethernet fabrics. That combination is intended to address the heavy communication cycles and distinctive traffic patterns of modern Artificial Intelligence workloads, providing a repeatable hardware and network architecture for both training and inference clusters.

A key differentiator is programmability. Unlike fixed-function Artificial Intelligence NICs, the AMD Pensando Pollara 400 Artificial Intelligence NIC is described as fully programmable in both hardware and software, enabling updates without a hardware overhaul as transport and congestion-control algorithms evolve. That capability allows the same server platform to be tuned over time for new Artificial Intelligence workloads, shifting business priorities, and changing topologies, while giving partners and customers a prevalidated starting point for building out Ethernet-based Artificial Intelligence networking at scale.

Impact Score: 52

Apple plans Intel 18A-P for M7 and 14A for A21

Apple is expected to use Intel’s 18A-P process for M7 chips in MacBook models and Intel’s 14A process for A21 chips in iPhones. The shift points to a broader supplier strategy as Apple moves beyond TSMC for parts of its future silicon roadmap.

Google and other chatbots surface real phone numbers

Generative Artificial Intelligence chatbots are surfacing real phone numbers and other personal details, sometimes by pulling from obscure public sources and sometimes by inventing plausible but wrong contact information. Privacy experts say users have few reliable ways to find out whether their data is in model training sets or to force its removal.

U.S. and China revisit Artificial Intelligence emergency talks

Washington and Beijing are exploring renewed talks on an emergency communication channel for Artificial Intelligence as fears grow over the capabilities of Anthropic’s Mythos model. The shift reflects rising concern in both capitals that competitive pressure is outpacing safeguards.

Artificial Intelligence divides employers as hiring and headcount shift

U.S. hiring beat expectations in April, but employers remain split on whether Artificial Intelligence should drive layoffs, productivity gains, or internal redeployment. At the same time, candidate use of Artificial Intelligence is outpacing employer adoption in hiring, adding new pressure to screening and entry-level recruiting.