AMD 'Venice' EPYC CPU identifiers surface in Linux kernel update

New identifiers tied to AMD's 'Venice' EPYC processors, touted as the first server-grade chips built on TSMC's 2 nm technology, have surfaced in Linux kernel updates, hinting at a leap in data center computing power.

Recent investigations by the data mining group InstLatX64 have unveiled a set of new CPU identifiers within Linux kernel updates, believed to be linked to AMD's anticipated next-generation EPYC processors. Following last month's exposure of unreleased 'Zen 5' processor families, the spotlight turned to identifiers including 'B50F00', 'B90F00', 'BA0F00', and 'BC0F00', which appeared in work to add support for 'Zen 6' technologies. Notably, the 'B50F00' ID, associated with the internal codename 'Weisshorn', is widely interpreted as the mainline successor to today's EPYC 'Turin' platform, itself known by the moniker 'Breithorn'. These discoveries have fueled speculation about AMD's direction for high-performance server silicon.
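
For context, identifiers in this format are typically raw CPUID Fn0000_0001 EAX signatures, the style InstLatX64 usually reports. The sketch below (a minimal Python decode, assuming that interpretation is correct) splits such a signature into AMD's standard family, model, and stepping fields; mapping the resulting numbers to specific 'Zen 6' products remains speculation.

```python
# Minimal sketch: decode raw CPUID Fn0000_0001 EAX signatures (the format
# InstLatX64 usually reports) into AMD family/model/stepping fields.
# Assumption: the leaked strings are such signatures; any product mapping is unconfirmed.

def decode_cpuid_signature(eax: int) -> dict:
    stepping    = eax & 0xF
    base_model  = (eax >> 4) & 0xF
    base_family = (eax >> 8) & 0xF
    ext_model   = (eax >> 16) & 0xF
    ext_family  = (eax >> 20) & 0xFF

    # Per the CPUID spec, the extended fields only apply when the base family is 0xF.
    family = base_family + ext_family if base_family == 0xF else base_family
    model = (ext_model << 4) | base_model if base_family == 0xF else base_model

    return {"family": family, "model": model, "stepping": stepping}


if __name__ == "__main__":
    for sig in ("B50F00", "B90F00", "BA0F00", "BC0F00"):
        f = decode_cpuid_signature(int(sig, 16))
        print(f"{sig}: family 0x{f['family']:02X}, model 0x{f['model']:02X}, stepping {f['stepping']}")
```

Decoded this way, all four signatures fall under family 0x1A with model numbers 0x50, 0x90, 0xA0, and 0xC0, i.e. new models within the family code already used for current 'Zen 5' parts rather than an entirely new family.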

Momentum around the 'Venice' codename intensified after AMD's mid-April 2025 announcement, where Dr. Lisa Su confirmed the chip as the industry's first high-performance computing processor to be 'taped out and brought up' using TSMC's advanced 2 nm process technology. This positions 'Venice' as a significant milestone both for AMD and for the broader data center compute ecosystem, which is increasingly reliant on process advancements to drive performance and energy efficiency. According to AMD's published CPU roadmap, the EPYC 'Venice' lineup is scheduled for a 2026 launch, drawing industry attention to its technical evolution and market readiness.

Leaks reported a month prior outlined possible configurations of the upcoming chips, suggesting that 'Venice' may feature a hybrid core arrangement blending standard 'Zen 6' cores with high-density 'Zen 6C' cores. Technical details from these leaks propose scalability to as many as 256 cores and 512 threads, backed by up to 1 GB of L3 cache per socket, marking a dramatic leap over previous generations. Additional identifiers in recent Linux code, coupled with speculation from observers such as InstLatX64, suggest the potential debut of new 'Venice-Dense' server variants and possible tie-ins with unannounced Instinct accelerators, further expanding AMD's compute ecosystem for high-demand data center and Artificial Intelligence workloads.
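
As a quick sanity check on those figures, the snippet below works through the arithmetic implied by the leaked top-end configuration; every number is taken from the leaks summarized above, and the SKU itself is hypothetical.

```python
# Back-of-the-envelope check on the leaked top-end 'Venice' configuration.
# All figures come from the leaks cited in the article; the SKU is hypothetical.

CORES_PER_SOCKET = 256      # leaked maximum core count (mix of 'Zen 6' and 'Zen 6C')
THREADS_PER_CORE = 2        # assumes SMT2, as on current EPYC parts
L3_PER_SOCKET_MB = 1024     # "up to 1 GB of L3 cache per socket"

threads_per_socket = CORES_PER_SOCKET * THREADS_PER_CORE
l3_per_core_mb = L3_PER_SOCKET_MB / CORES_PER_SOCKET

print(f"Threads per socket: {threads_per_socket}")     # 512, matching the leak
print(f"L3 per core:        {l3_per_core_mb:.1f} MB")  # 4.0 MB
```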
