AMD is introducing the Ryzen AI 400 series desktop APUs for the Socket AM5 platform as the successor to the Ryzen 8000G series, known as Phoenix Point. Phoenix Point is based on the Zen 4 microarchitecture, while the new Ryzen AI 400 lineup is built on Gorgon Point silicon powered by the Zen 5 microarchitecture. A major shift this generation is a stronger focus on AI performance: the silicon integrates an XDNA 2 neural processing unit rated at 50 TOPS, making the Ryzen AI 400 series the first socketed desktop processor family to meet Microsoft's Copilot+ requirements.
The internal CPU layout also changes significantly from the previous desktop APU generation. Phoenix Point used a single-CCX configuration with up to eight full-sized Zen 4 cores, whereas Gorgon Point returns to a dual-CCX design for its CPU complex. The first CCX contains four full-sized Zen 5 cores that can boost up to the maximum rated speed of each APU model, backed by 8 MB of L3 cache shared across those four cores. The second CCX is populated with four compact Zen 5c cores, likewise sharing their own 8 MB of L3 cache, creating a 4P+4C setup aimed at balancing performance and efficiency.
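The dual-CCX layout described above can be summarized in a short Python sketch. The `CCX` structure and the figures below simply mirror the 4P+4C configuration and per-CCX 8 MB L3 stated in this article; they are an illustrative model, not an AMD API or a published topology dump.

```python
from dataclasses import dataclass

@dataclass
class CCX:
    """One core complex: a group of cores sharing an L3 slice."""
    core_type: str   # "Zen 5" (performance) or "Zen 5c" (compact)
    cores: int       # number of cores in this complex
    l3_mb: int       # L3 cache shared only within this complex

# Dual-CCX layout of Gorgon Point as described in the text (4P + 4C).
gorgon_point = [
    CCX(core_type="Zen 5",  cores=4, l3_mb=8),
    CCX(core_type="Zen 5c", cores=4, l3_mb=8),
]

total_cores = sum(ccx.cores for ccx in gorgon_point)   # 8 cores total
total_l3 = sum(ccx.l3_mb for ccx in gorgon_point)      # 16 MB, split 8 + 8
print(f"{total_cores} cores, {total_l3} MB L3 across {len(gorgon_point)} CCXs")
```

Note that because each 8 MB pool is private to its CCX, a core never sees a single unified 16 MB L3; this is the detail that makes cross-CCX traffic relevant in the next paragraph.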
Zen 5c is described as a physically compacted version of Zen 5 that maintains identical IPC and ISA support, but it is limited to roughly two-thirds of the maximum boost frequency available to the full-sized Zen 5 cores. This arrangement means workloads can be scheduled across performance and compact cores while retaining architectural parity, with frequency headroom as the main differentiator. When threads move between the two CCX complexes, their instructions and data must traverse the chip’s Infinity Fabric interconnect, similar to the behavior seen in earlier Zen 2 designs, which may influence latency characteristics and scheduling strategies for mixed-core workloads.
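The roughly two-thirds frequency relationship can be illustrated with a back-of-the-envelope calculation. The 5.0 GHz Zen 5 boost figure below is a hypothetical placeholder for illustration, not a published specification for any Ryzen AI 400 model.

```python
def zen5c_boost_estimate(zen5_boost_ghz: float, ratio: float = 2 / 3) -> float:
    """Estimate the compact-core boost ceiling from the full core's boost.

    The article states Zen 5c is limited to roughly two-thirds of the
    full-sized Zen 5 cores' maximum boost; `ratio` is that rough factor.
    """
    return round(zen5_boost_ghz * ratio, 2)

# Hypothetical example: a model whose Zen 5 cores boost to 5.0 GHz
# would see its Zen 5c cores top out around 3.33 GHz.
print(zen5c_boost_estimate(5.0))  # → 3.33
```

Since IPC and ISA support are identical across the two core types, this frequency gap is the only first-order performance difference a scheduler has to reason about when placing single-threaded work.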
