AMD reveals open artificial intelligence ecosystem vision and new rack-scale infrastructure

AMD unveiled a comprehensive open ecosystem for artificial intelligence, featuring advanced accelerators and scalable rack-scale infrastructure, at its Advancing AI 2025 event.

AMD has outlined its ambitious strategy for an open, integrated artificial intelligence ecosystem at its Advancing AI 2025 event, marking a pivotal moment in the company's campaign to become a central player in large-scale artificial intelligence infrastructure. The announcement features a full-stack approach, integrating new silicon, robust software, and scalable rack-scale systems, all built around industry standards and open architectures.

Central to AMD’s showcase are the new AMD Instinct MI350 Series accelerators, which serve as the backbone for performance-driven artificial intelligence workloads. This hardware push is paired with the expansion of the ROCm ecosystem, AMD’s open software platform enabling developers to more easily harness the full capabilities of its hardware for machine learning and artificial intelligence applications. Through partnerships and ecosystem growth, AMD aims to provide organizations with greater flexibility and interoperability, fostering collaboration rather than vendor lock-in.

Another highlight is AMD's roadmap for open rack-scale designs, targeting leadership performance in artificial intelligence through 2027 and beyond. These open, scalable systems reflect AMD's long-term strategy to accommodate rapidly evolving artificial intelligence requirements across data centers worldwide. Altogether, AMD's announcements at Advancing AI 2025 signal a future where open standards and collaborative development set the pace for artificial intelligence infrastructure, challenging closed alternatives and emphasizing multi-vendor innovation.

