AMD Instinct MI500 UAL256 mega pod to scale to 256 GPUs and 64 Verano CPUs for artificial intelligence and high-performance computing

AMD is preparing a second-generation rack-scale system, the Instinct MI500 UAL256 mega pod, that pairs one Verano CPU with four Instinct MI500 accelerators per node to support advanced AI and high-performance computing workloads.

AMD has begun supply chain preparations for a second-generation rack-scale system, the Instinct MI500 UAL256, according to SemiAnalysis. Each compute node in the design pairs a single Verano CPU with four Instinct MI500 accelerators. The architecture supports up to 64 nodes split across two 32-node racks, yielding 256 logical GPUs and 64 CPUs per mega pod for advanced AI and high-performance computing workloads.

The mega pod splits compute into two 32-node racks linked by a single central rack that hosts networking hardware. That central rack contains 18 switching trays, each populated with four 102.4T Vulcano switch ASICs, for 72 switch ASICs in total. The Vulcano ASICs are designed for 800G external throughput and are manufactured on TSMC's 3 nm node, according to the report. AMD positions the UAL256 as a scale-up system that follows the Instinct MI450X IF128, also known as Helios, which is scheduled for the second half of 2026.
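The rack math implied by the report can be checked with a quick sketch. The node, tray, and per-node figures below come from the article itself; the constant names are illustrative, not AMD's terminology:

```python
# Back-of-the-envelope totals for the reported UAL256 mega pod layout.
# All counts are taken from the SemiAnalysis figures cited in the article.

GPUS_PER_NODE = 4       # Instinct MI500 accelerators per node
CPUS_PER_NODE = 1       # one Verano CPU per node
NODES_PER_RACK = 32     # two compute racks of 32 nodes each
COMPUTE_RACKS = 2

SWITCH_TRAYS = 18       # switching trays in the central networking rack
ASICS_PER_TRAY = 4      # 102.4T Vulcano switch ASICs per tray

nodes = NODES_PER_RACK * COMPUTE_RACKS        # 64 nodes
gpus = nodes * GPUS_PER_NODE                  # 256 logical GPUs
cpus = nodes * CPUS_PER_NODE                  # 64 CPUs
switch_asics = SWITCH_TRAYS * ASICS_PER_TRAY  # 72 Vulcano ASICs

print(f"{nodes} nodes, {gpus} GPUs, {cpus} CPUs, {switch_asics} switch ASICs")
```

Multiplying out the reported figures reproduces the 256-GPU, 64-CPU totals that give the mega pod its UAL256 name.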

AMD disclosed limited details about the Verano central processor during its Advancing AI event. Verano is said to introduce the Zen 7 microarchitecture, with higher instructions per cycle and support for newer instruction sets. It is reportedly expected to retain the Socket SP7 infrastructure, suggesting it will keep the memory and PCIe interfaces introduced in 2026, including PCIe Gen 6, UALink, and Ultra Ethernet. The company did not confirm whether Verano will increase core counts beyond the up to 256 cores per package cited for Venice. The Instinct MI500 UAL256 mega pod is slated to arrive in 2027, timed to align with Verano and the Instinct MI500 accelerator.

Saudi AI startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing AI agents for enterprises and public institutions.

Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal and multilingual AI models that includes three Ministral edge models and a sparse Mistral Large 3, a mixture-of-experts model with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments, available starting Tuesday, Dec. 2.
