AMD Medusa Halo APU leak reveals up to 24 cores and 48 RDNA 5 CUs

A leak from Moore's Law is Dead outlines AMD's Medusa Halo as a 2027 top-tier APU with Zen 6 chiplets, up to 24 CPU cores and 48 RDNA 5 compute units, and new memory controller options.

A fresh leak from Moore's Law is Dead describes AMD's Medusa Halo APU as the company's top-of-the-line chip planned for 2027 and rejects earlier rumors that the project was cancelled. The report says the design will rely on Zen 6 CPU chiplets manufactured on TSMC's N2P process, while the input/output (I/O) die will use TSMC's N3P process. The leak positions Medusa Halo as a multi-chiplet APU aimed at significantly increasing both CPU and GPU capability compared with current integrated designs.

On the CPU side, the base Medusa Halo configuration reportedly combines 12 Zen 6 cores with two power-efficient Zen 6 LP cores. High-end variants are said to add a second 12-core Zen 6 CCD, delivering up to 24 CPU cores, or up to 26 when the LP cores are included. The pairing of N2P compute chiplets with an N3P I/O die reflects AMD's practice of assigning different process nodes to compute and connectivity duties in its future APU strategy.

Graphics and memory enhancements are a focal point of the leak. Medusa Halo is said to feature 48 compute units based on RDNA 5 and 20 megabytes of L2 cache, a notable increase over the 40 CUs in the current Strix Halo APU. The leak suggests the integrated GPU could deliver performance near an NVIDIA GeForce RTX 5070 Ti. Memory support is reported as either a 384-bit LPDDR6 controller or a 256-bit LPDDR5X controller, both intended to provide the high bandwidth needed to feed the larger GPU. Together, the CPU, GPU, and memory changes portray Medusa Halo as a substantial step up for integrated graphics and hybrid chip designs.

Impact Score: 72

Red Hat Artificial Intelligence 3 tackles inference complexity

Red Hat introduced Red Hat Artificial Intelligence 3 to move enterprise models from pilots to production, with a strong focus on scalable inference on Kubernetes. The release adds llm-d, a unified API on Llama Stack, and tools for Model-as-a-Service delivery.

Nvidia DGX Spark arrives for world’s Artificial Intelligence developers

Nvidia is shipping DGX Spark, a compact desktop system that delivers a petaflop of Artificial Intelligence performance and unified memory to bring large model development and agent workflows on premises. Partner systems from major PC makers and channel partners broaden availability starting Oct. 15.

EU regulatory developments on the Artificial Intelligence Act

The European Commission finalized a General Purpose Artificial Intelligence Code of Practice and signaled phased enforcement of the Artificial Intelligence Act. Companies gain transitional breathing room but should use it to align with new transparency, copyright, and safety expectations.
