AMD expands Ryzen AI Embedded P100 processors for edge workloads

AMD is extending its Ryzen AI Embedded P100 Series with higher core counts, more graphics compute, and higher system TOPS (tera operations per second) to target demanding edge and robotics use cases.

Factory automation, physical AI in mobile robotics, and other AI-driven edge applications are evolving rapidly, increasing demand for computing platforms that deliver real-time AI processing, deterministic performance, and long-term reliability in always-on environments. These use cases require consistently low-latency responses and the ability to run complex models at the edge without relying on cloud connectivity.

To address these requirements, AMD is expanding its AMD Ryzen AI Embedded P100 Series processor portfolio with new models positioned for next-generation edge deployments. The new processors feature up to 2x higher CPU core counts, intended to support more demanding multitasking and control workloads in industrial and robotics systems. By scaling compute resources within the same family, AMD aims to give system designers additional headroom while maintaining an embedded-focused platform.

The updated Ryzen AI Embedded P100 Series also delivers up to 8x higher graphics processing unit (GPU) compute, targeting graphics-intensive and parallel AI workloads that run locally on edge devices. In addition, the new processors provide an estimated 36% higher system TOPS, designed to accelerate AI inference for real-time decision making. Together, these gains are aimed at enabling more capable, always-on edge systems across factory automation, autonomous mobile robots, and other physical AI applications that depend on sustained, deterministic processing.


Business anniversary meets AI arrival

A reflection on nine years of building a digital local business newsroom turns into a broader assessment of how generative AI is already reshaping professional communication and editorial work. The piece balances optimism about productivity gains with concern over overreliance on automated thinking.

Penguin Solutions launches CXL-based KV cache server

Penguin Solutions introduced a production-ready KV cache server built on CXL memory technology for enterprise-scale inference and agentic AI workloads. The system is positioned to ease memory bottlenecks, improve GPU cluster efficiency, and reduce latency.

Micron completes Tongluo P5 site acquisition in Taiwan

Micron has completed its acquisition of PSMC’s P5 site in Tongluo, Miaoli County, Taiwan. The facility adds cleanroom capacity near Micron’s Taichung campus and is intended to support more leading-edge DRAM and HBM output for growing AI-driven demand.

Qwen3.5 recipes shared for Jetson Thor

A Jetson Thor forum post shares setup recipes for running multiple Qwen3.5 models with NVIDIA’s latest vLLM repository for Thor. The largest reported working model is Qwen3.5-122B-A10B, with notes on NVFP4 and INT4 tradeoffs.
