AMD and Stability AI launch local Stable Diffusion 3.0 image generation on NPU laptops

AMD teams up with Stability AI to deliver high-quality, local Artificial Intelligence image generation on Ryzen-powered laptops, eliminating GPU dependency.

AMD and Stability AI have unveiled a significant collaboration to adapt Stable Diffusion 3.0 Medium for Stability Amuse, a creative Artificial Intelligence art platform. By refining the model architecture to operate efficiently on Neural Processing Units (NPUs), the partnership enables enhanced image generation and improved text processing directly on devices equipped with AMD's latest Ryzen AI XDNA 2 NPUs. This development builds on technology showcased at Computex 2024, where AMD and Stability AI introduced SDXL Turbo, the world's first block FP16 stable diffusion model, mixing FP16 accuracy with performance closer to INT8 compute levels.
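The "block FP16" idea pairs near-FP16 dynamic range with INT8-style storage by sharing one exponent across a block of values. AMD has not published the exact layout, so the following is a toy block floating-point sketch (the `quantize_block_fp` helper is hypothetical, illustrating the general technique rather than AMD's actual format):

```python
import math

def quantize_block_fp(values, mantissa_bits=8):
    """Toy block floating-point: one shared exponent per block,
    one signed integer mantissa per value (illustrative only,
    not AMD's actual block FP16 layout)."""
    # The shared exponent comes from the largest-magnitude value in the block.
    shared_exp = max((math.frexp(v)[1] for v in values if v != 0.0), default=0)
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    # Each value is stored as an INT8-sized mantissa relative to the shared scale.
    mantissas = [min(hi, max(lo, round(v / scale))) for v in values]
    return shared_exp, mantissas

def dequantize_block_fp(shared_exp, mantissas, mantissa_bits=8):
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    return [m * scale for m in mantissas]
```

With 8-bit mantissas, per-value storage is INT8-sized while the shared exponent preserves FP16-like dynamic range, which is the trade-off the article describes.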

The new block FP16 SD 3 Medium model delivers superior picture quality with considerably reduced memory requirements compared to previous iterations, functioning smoothly on laptops with 24 GB RAM and consuming just 9 GB during operation. This optimization is crucial, as it makes high-precision, local Artificial Intelligence image generation feasible on mainstream hardware, sidestepping the heavy demands of quantization. A two-stage processing pipeline leverages the XDNA 2 NPU to boost output from roughly 1 megapixel (1024 x 1024) up to 4 megapixels (2048 x 2048), pushing desktop-grade creative output onto portable devices.
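As a quick arithmetic check on those resolution figures (plain Python, nothing AMD-specific):

```python
def megapixels(width, height):
    # Pixel count in millions: 1 megapixel = 1,000,000 pixels.
    return width * height / 1_000_000

base = megapixels(1024, 1024)      # ~1.05 MP for the base generation stage
upscaled = megapixels(2048, 2048)  # ~4.19 MP after the NPU upscale stage
print(round(base, 2), round(upscaled, 2))  # prints: 1.05 4.19
```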

Originally, Amuse required GPU acceleration for tasks involving Stable Diffusion Medium, restricting its accessibility to users with dedicated GPUs. Now, the latest Amuse 3.1 release lets users select between GPU and NPU acceleration, broadening hardware compatibility and enabling efficient, local Artificial Intelligence workflows on Ryzen AI-powered laptops. To access this feature, users need to install the latest AMD Adrenalin Driver and Amuse 3.1 Beta, enable HQ mode in EZ Mode, and activate the XDNA 2 Stable Diffusion Offload option. This marks a pivotal step for creative professionals and hobbyists seeking high-resolution image generation without reliance on cloud processing or high-end GPUs.

Impact Score: 68

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
