AMD details Ryzen AI 400G desktop APUs with 4P+4C Zen 5 design

AMD is introducing the Ryzen AI 400 series desktop APUs for Socket AM5, built on new 4 nm Gorgon Point silicon with Zen 5 cores and an XDNA 2 NPU that targets Microsoft Copilot+ systems.

AMD is introducing the Ryzen AI 400 series desktop APUs for the Socket AM5 platform as the successor to the Ryzen 8000G series, known as Phoenix Point. Phoenix Point is based on the Zen 4 microarchitecture, while the new Ryzen AI 400 lineup is built on Gorgon Point silicon powered by Zen 5. A major shift in this generation is a stronger focus on AI performance: the silicon integrates an XDNA 2 neural processing unit rated for 50 TOPS of throughput, which makes the Ryzen AI 400 series the first socketed desktop processor family to meet Microsoft's Copilot+ requirements (which call for an NPU delivering 40+ TOPS, among other criteria).

The internal CPU layout also changes significantly from the previous desktop APU generation. Phoenix Point used a single-CCX configuration with up to eight full-sized Zen 4 cores, whereas Gorgon Point returns to a dual-CCX design for its CPU complex. The first CCX contains four full-sized Zen 5 cores that can boost to each APU model's maximum rated clock, backed by 8 MB of L3 cache shared across those four cores. The second CCX is populated with four compact Zen 5c cores, which share their own 8 MB of L3 cache, creating a 4P+4C setup aimed at balancing performance and efficiency.

Zen 5c is described as a physically compacted version of Zen 5 that maintains identical IPC and ISA support, but it is limited to roughly two-thirds of the maximum boost frequency available to the full-sized Zen 5 cores. This arrangement means workloads can be scheduled across performance and compact cores while retaining architectural parity, with frequency headroom as the main differentiator. When threads move between the two CCX complexes, their instructions and data must traverse the chip’s Infinity Fabric interconnect, similar to the behavior seen in earlier Zen 2 designs, which may influence latency characteristics and scheduling strategies for mixed-core workloads.
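Because cross-CCX traffic traverses the Infinity Fabric, latency-sensitive software sometimes pins itself to a single CCX to avoid thread migration across that boundary. A minimal Linux sketch, assuming (hypothetically) that core IDs 0-3 map to the full-sized Zen 5 CCX; the real enumeration is platform-specific and should be read from the kernel's cache topology (e.g. /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list):

```python
import os

# Assumed mapping, for illustration only: the four full-sized Zen 5 cores.
# On real hardware, derive this set from the L3 cache topology in sysfs.
FULL_ZEN5_CCX = {0, 1, 2, 3}

def pin_to_ccx(wanted: set) -> set:
    """Restrict the calling process (and its threads) to one CCX's cores.

    The wanted set is intersected with the cores actually available, so the
    sketch also runs on machines with fewer cores; if none of the wanted
    cores exist, the affinity is left covering the available cores.
    """
    available = os.sched_getaffinity(0)          # 0 = the calling process
    target = (wanted & available) or available   # fall back if no overlap
    os.sched_setaffinity(0, target)
    return os.sched_getaffinity(0)
```

After pinning, every thread the process spawns inherits the mask, so instructions and data stay within one CCX's shared L3 instead of bouncing over the fabric.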
