AMD adds new Ryzen AI Max+ Strix Halo chips with boosted graphics

AMD has introduced two Ryzen AI Max+ Strix Halo processors that trade some CPU cores for stronger integrated graphics, aiming squarely at gaming and Copilot+ capable laptops.

AMD expanded its Ryzen AI Max Strix Halo line of super APUs with two new models, the Ryzen AI Max+ 392 and Ryzen AI Max+ 388. Both processors use the existing Strix Halo package, which combines a 16-core/32-thread Zen 5 CPU built from standard Zen 5 CCDs with a large I/O die that integrates a powerful GPU based on the RDNA 3.5 graphics architecture. The graphics block has 40 compute units (2,560 stream processors), is paired with a Copilot+ ready NPU rated at 50 TOPS, and connects to a unified 256-bit LPDDR5X memory interface.
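As a quick sanity check on the figures above: each compute unit in AMD's RDNA architectures contains 64 stream processors (a general RDNA figure assumed here, not stated in the announcement), so the 40-CU configuration works out to the quoted 2,560 stream processors:

```python
# Stream processors per compute unit in AMD's RDNA architectures
# (64 per CU is a general RDNA assumption, not from the announcement itself).
STREAM_PROCESSORS_PER_CU = 64

compute_units = 40  # Strix Halo iGPU CU count from the announcement
stream_processors = compute_units * STREAM_PROCESSORS_PER_CU
print(stream_processors)  # → 2560
```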

The Ryzen AI Max+ 392 is configured with a 12-core/24-thread CPU spread across two CCDs, clock speeds of 3.20 GHz base and 5.00 GHz maximum boost, a full-featured integrated GPU with all 40 CUs enabled, the 50 TOPS NPU, and a configurable TDP range of 45 W to 120 W. This configuration keeps the complete graphics and neural processing capabilities of the platform while slightly trimming the CPU core count relative to the full Strix Halo implementation. The design targets users who want desktop-class graphics and AI acceleration in a mobile or compact form factor without sacrificing boost frequencies.

The Ryzen AI Max+ 388 features an 8-core/16-thread CPU on a single CCD, running at 3.60 GHz base and 5.00 GHz boost, again with the fully unlocked integrated GPU (all 40 CUs enabled), the 50 TOPS NPU, and a 45 W to 120 W cTDP. AMD positions both chips as distinct within the broader Strix Halo lineup by emphasizing graphics power at the expense of CPU core count. According to the company, this balance is intended to make the new processors particularly attractive for gaming-focused systems and other graphics-heavy workloads at lower price points than the higher core count Strix Halo variants.


