AMD Launches Pensando Pollara 400 AI NIC to Accelerate Scalable Artificial Intelligence Workloads

AMD unveils the Pensando Pollara 400, a fully programmable network interface card designed to optimize large-scale artificial intelligence (AI) and machine learning deployments.

AMD has officially announced general availability of its Pensando Pollara 400 AI NIC, which is now shipping to customers. The product targets the accelerating demands of AI, large language models, and agentic AI applications. The surge in these advanced workloads is prompting data centers and enterprises to seek parallel computing infrastructure that not only delivers superior performance but also adapts to evolving AI and machine learning needs. A critical technical challenge in this context is scaling inter-node GPU-to-GPU communication networks for maximum efficiency.

Reflecting its commitment to open ecosystems and customer choice, AMD designed the Pensando Pollara 400 AI NIC as a fully programmable network interface controller aligned with the emerging standards defined by the Ultra Ethernet Consortium (UEC). The new offering promises to reduce total cost of ownership by enabling flexible, future-proof data center architectures without compromising performance. The Pollara 400 is engineered to provide the scalable, high-throughput connectivity that AI and machine learning clusters require, making it easier for organizations to build and expand robust AI infrastructure.

The launch underscores AMD's strategic focus on both innovation and open standards in AI networking. By delivering high-speed, programmable, UEC-compliant network capabilities, the Pensando Pollara 400 is positioned as a pivotal component of next-generation AI data centers. With the product now available and shipping to customers, AMD is reinforcing its role in accelerating AI deployment at scale, helping organizations meet both current and future infrastructure demands for increasingly complex and resource-intensive AI workloads.
