AMD details Ryzen AI 400G desktop APUs with 4P+4C Zen 5 design

AMD is introducing Ryzen AI 400 series desktop APUs for Socket AM5, built on new 4 nm Gorgon Point silicon with Zen 5 cores and an XDNA 2 NPU that targets Microsoft Copilot+ systems.

AMD is introducing the Ryzen AI 400 series desktop APUs for the Socket AM5 platform as the successor to the Ryzen 8000G series, which is based on Phoenix silicon. Phoenix uses the Zen 4 microarchitecture, while the new Ryzen AI 400 lineup is built on Gorgon Point silicon powered by the Zen 5 microarchitecture. A major shift in this generation is a stronger focus on AI performance: the silicon integrates an XDNA 2 neural processing unit specified to deliver 50 TOPS of throughput, making the Ryzen AI 400 series the first socketed desktop processor family to meet Microsoft Copilot+ requirements.

The internal CPU layout also changes significantly from the previous desktop APU generation. Phoenix used a single CCX configuration with up to eight full-sized Zen 4 cores, whereas Gorgon Point returns to a dual CCX design for its CPU complex. The first CCX contains four full-sized Zen 5 cores that can boost up to the maximum rated speed of each APU model, backed by 8 MB of L3 cache shared across those four cores. The second CCX is populated with four compact Zen 5c cores, which share their own 8 MB of L3 cache, creating a 4P+4C setup aimed at balancing performance and efficiency.
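On Linux, this split L3 topology is visible in sysfs: each logical CPU exposes which siblings share its L3 slice via `/sys/devices/system/cpu/cpuN/cache/index3/shared_cpu_list`. A minimal sketch of parsing that list format, using a hypothetical enumeration for a 4P+4C part with SMT disabled (the real CPU numbering may differ and should be read from the running system):

```python
def parse_cpu_list(spec: str) -> set[int]:
    """Parse a Linux sysfs cpu-list string such as '0-3' or '0,2,4-7'
    into a set of logical CPU numbers."""
    cpus: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Hypothetical enumeration of the two L3 domains on a 4P+4C Gorgon Point
# part with SMT off (illustrative only, not a confirmed layout):
zen5_ccx = parse_cpu_list("0-3")    # full Zen 5 cores, shared 8 MB L3
zen5c_ccx = parse_cpu_list("4-7")   # Zen 5c cores, separate shared 8 MB L3
```

Grouping CPUs by identical `shared_cpu_list` strings is enough to recover the two CCX domains without any vendor-specific tooling.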

Zen 5c is described as a physically compacted version of Zen 5 that maintains identical IPC and ISA support, but it is limited to roughly two-thirds of the maximum boost frequency available to the full-sized Zen 5 cores. This arrangement means workloads can be scheduled across performance and compact cores while retaining architectural parity, with frequency headroom as the main differentiator. When threads move between the two CCX complexes, their instructions and data must traverse the chip’s Infinity Fabric interconnect, similar to the behavior seen in earlier Zen 2 designs, which may influence latency characteristics and scheduling strategies for mixed-core workloads.
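Because a thread migrating between the two CCXs pays an Infinity Fabric hop, one common mitigation is to pin latency-sensitive work to a single CCX. A minimal sketch using Python's `os.sched_setaffinity` on Linux, assuming the same hypothetical CPU numbering as above (CPUs 0-3 on the full Zen 5 CCX):

```python
import os

# Assumed layout for a 4P+4C part with SMT off; on a real system the sets
# should be derived from sysfs cache topology rather than hard-coded.
PERF_CCX = set(range(0, 4))     # full Zen 5 cores
COMPACT_CCX = set(range(4, 8))  # Zen 5c cores

def pin_to_ccx(pid: int, ccx: set[int]) -> set[int]:
    """Restrict a process to one CCX so its threads never cross the
    Infinity Fabric between core complexes. Intersects with the CPUs the
    process may actually use, so it degrades gracefully on machines whose
    topology does not match the assumed layout."""
    allowed = os.sched_getaffinity(pid) & ccx
    if allowed:
        os.sched_setaffinity(pid, allowed)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    # Pin the current process (pid 0) to the performance CCX.
    print(sorted(pin_to_ccx(0, PERF_CCX)))
```

The same pattern works for pinning background work to the compact CCX, leaving the full-frequency cores free for foreground threads.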


