AMD used its 2026 International CES event to show off the upcoming Epyc Venice enterprise processor, with CEO Dr Lisa Su highlighting the chip as the compute heart of the company's 2026 Helios artificial intelligence racks, where it is paired with AMD MI455X AI GPUs. Each node in the rack features four MI455X GPUs and one Epyc Venice processor in a 256-core/512-thread configuration, underscoring that this platform is tuned for dense, highly parallel data center workloads.
The Venice package adopts a markedly different chiplet layout from current-generation Epyc designs, signaling a substantial rethink of how AMD organizes compute and I/O on its flagship server silicon. Two slender, centralized server I/O dies built on the 4 nm node sit at the middle of the package, flanked on either side by as many as eight CCDs in total, each built on a 2 nm foundry node and packing 32 Zen 6 cores. AMD has not yet clarified whether these are full-sized Zen 6 cores capable of sustaining high clock speeds, or compact Zen 6c variants that keep identical ISA and IPC characteristics but trade some maximum frequency for efficiency.
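The chiplet math behind the headline figure checks out, as a quick back-of-the-envelope sketch shows. The CCD and core counts come from AMD's disclosure; the SMT-2 threading factor is an assumption carried over from current Zen server parts, which AMD has not yet confirmed for Zen 6.

```python
# Back-of-the-envelope check of the Venice configuration described above.
CCDS_PER_PACKAGE = 8     # up to eight 2 nm CCDs per package (per AMD's CES disclosure)
CORES_PER_CCD = 32       # Zen 6 cores per CCD
THREADS_PER_CORE = 2     # assumed SMT-2, as on current Zen server cores (unconfirmed)

cores = CCDS_PER_PACKAGE * CORES_PER_CCD
threads = cores * THREADS_PER_CORE
print(f"{cores} cores / {threads} threads")  # 256 cores / 512 threads
```

The product matches the 256-core/512-thread figure AMD quoted for each Helios node's Venice processor.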
Memory and connectivity also get substantial upgrades to feed both the Venice compute complex and the quartet of MI455X accelerators in each Helios node. Each Venice package features a 16-channel DDR5 memory interface (32 sub-channels), which is likely why AMD needed to disaggregate the sIOD into two dies joined at the hip by a high-speed switching fabric. In parallel, AMD is also expected to significantly increase Venice's PCIe and CXL lane counts over the current generation to support those four artificial intelligence GPUs, plus DPUs and 800G NICs, indicating a platform built expressly for the next wave of accelerated computing deployments.
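The sub-channel count follows directly from the DDR5 standard, which splits each channel into two independent sub-channels; a short sketch makes the arithmetic explicit. The 16-channel figure is from the article, while the DDR5-8000 transfer rate used for the peak-bandwidth estimate is purely an illustrative assumption, since AMD has not disclosed a supported speed grade.

```python
# Sub-channel count and an illustrative peak-bandwidth estimate for Venice.
CHANNELS = 16                 # 16-channel DDR5 interface (per the article)
SUBCHANNELS_PER_CHANNEL = 2   # DDR5 splits each channel into two sub-channels
ASSUMED_MT_S = 8000           # hypothetical DDR5-8000; speed grade unconfirmed
BYTES_PER_TRANSFER = 8        # 64-bit data path per full channel

subchannels = CHANNELS * SUBCHANNELS_PER_CHANNEL
peak_gbs = CHANNELS * ASSUMED_MT_S * BYTES_PER_TRANSFER / 1000
print(f"{subchannels} sub-channels, ~{peak_gbs:.0f} GB/s peak")  # 32 sub-channels, ~1024 GB/s peak
```

Even under this assumed speed grade, the estimate shows why a single monolithic sIOD would struggle: routing 32 sub-channels' worth of PHYs around one die is a plausible motivation for the split-sIOD layout described above.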
