AMD is promoting a new ‘Agent Computer’ concept centered on running artificial intelligence agents locally on Windows hardware rather than in remote data centers. The company has released a guide for deploying OpenClaw on two hardware configurations, called ‘RyzenClaw’ and ‘RadeonClaw,’ both designed around AMD silicon to keep agent workloads entirely off the cloud. The initiative is framed around user and business demands for control over their data, affordable always-on artificial intelligence with no usage limits, and assurance that models run on their own infrastructure rather than a third party’s.
The reference setup uses WSL2 on Windows, with LM Studio handling local large language model inference via llama.cpp. The stack also supports Memory.md with locally generated embeddings, so context storage and retrieval carry no cloud dependency. AMD states that the environment can be configured in under an hour, positioning it for early adopters and developers who want to experiment with personal artificial intelligence agents. The company describes these agent-focused PCs as a progression beyond conventional artificial intelligence PCs that primarily accelerate individual inference tasks.
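What keeps this loop local is that LM Studio exposes an OpenAI-compatible HTTP server on the machine itself (its documented default base URL is http://localhost:1234/v1), so agent tooling can be pointed at the local endpoint instead of a hosted API. As a minimal sketch, the snippet below builds an OpenAI-style chat request for that local server; the `local-model` identifier is a placeholder, not a name from AMD's guide.

```python
import json

# LM Studio's local server speaks the OpenAI wire format; this base URL
# is its default. Nothing here leaves the machine.
LOCAL_BASE_URL = "http://localhost:1234/v1"  # assumption: default LM Studio port


def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completion payload for the local server.

    The model name is whatever identifier LM Studio shows for the model
    you have loaded -- "local-model" is a placeholder.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


payload = build_chat_request("Summarize today's agent activity.")
print(json.dumps(payload, indent=2))
```

Because the endpoint mimics the OpenAI API, most existing agent frameworks can be redirected to it simply by overriding the base URL, which is the mechanism that lets the whole agent loop run without a cloud key.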
The RyzenClaw configuration is built around a Ryzen AI Max+ system with 128 GB of unified memory, and AMD specifically recommends reserving 96 GB as variable graphics memory for agent workloads. Running the Qwen 3.5 35B A3B model, that configuration delivers around 45 tokens per second, processes 10,000 input tokens in roughly 19.5 seconds, supports a 260K token context window, and can run up to six agents concurrently. AMD presents this as a way to explore ‘agent swarm’ behavior on consumer-grade hardware.

The RadeonClaw path pairs OpenClaw with the Radeon AI PRO R9700, a workstation-class GPU with 32 GB of VRAM. That setup is considerably faster: around 120 tokens per second with the same model, with 10,000 input tokens processed in about 4.4 seconds. The tradeoff is a smaller 190K token context window and support for only two concurrent agents, versus six on the Ryzen AI Max+ path.
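The published figures imply that the GPU's advantage is larger for prompt processing than for generation, which matters for agents that repeatedly re-read long contexts. A quick back-of-the-envelope check, using only the numbers above:

```python
# Figures from AMD's guide: 10,000 input tokens processed in ~19.5 s
# (RyzenClaw) vs ~4.4 s (RadeonClaw); generation at 45 vs 120 tokens/s.
INPUT_TOKENS = 10_000

ryzen_prefill = INPUT_TOKENS / 19.5   # prompt-processing rate, tokens/s
radeon_prefill = INPUT_TOKENS / 4.4   # prompt-processing rate, tokens/s

print(f"RyzenClaw prefill:  {ryzen_prefill:,.0f} tok/s")
print(f"RadeonClaw prefill: {radeon_prefill:,.0f} tok/s")
print(f"Prefill speedup:    {radeon_prefill / ryzen_prefill:.1f}x")
print(f"Generation speedup: {120 / 45:.1f}x")
```

The GPU path comes out roughly 4.4× faster at ingesting a long prompt but only about 2.7× faster at generating output, so agents that churn through large contexts benefit disproportionately from RadeonClaw, while the Ryzen AI Max+ path trades that speed for more memory, a larger context window, and more concurrent agents.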
