Broadcom reveals Tomahawk 6 switch chip series with 102.4 Tbps bandwidth

Broadcom debuts the Tomahawk 6 chip series, providing record-breaking 102.4 Tbps bandwidth for next-generation Artificial Intelligence networks.

Broadcom Inc. has ushered in a new era of high-performance networking by launching the Tomahawk 6 switch series, which delivers a staggering 102.4 terabits per second (Tbps) of switching capacity in a single chip. That figure doubles the bandwidth of any Ethernet switch currently on the market, positioning the Tomahawk 6 at the forefront of infrastructure for Artificial Intelligence data centers and cloud-scale deployments.

Engineered for the demands of massive scale-up and scale-out Artificial Intelligence networks, Tomahawk 6 integrates support for both 100G and 200G SerDes interfaces alongside co-packaged optics (CPO) technology, enhancing flexibility for diverse deployment scenarios. The platform comes equipped with an extensive set of adaptive routing features as well as interconnect options that are optimized for networking clusters containing more than one million XPUs. Broadcom describes the chip’s energy efficiency and routing agility as essential enablers for the development and operation of large, distributed Artificial Intelligence training and inference clusters.
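A quick sanity check shows how 100G and 200G SerDes lanes could add up to the headline figure. The lane counts below are illustrative assumptions for this sketch, not specifications confirmed in the article:

```python
# Back-of-the-envelope check of the 102.4 Tbps headline figure.
# Assumed lane counts (illustrative, not confirmed specs): a 102.4 Tbps
# switch could aggregate 512 lanes of 200G SerDes, or 1,024 lanes of
# 100G SerDes.

def total_tbps(lanes: int, gbps_per_lane: int) -> float:
    """Aggregate switching bandwidth in terabits per second."""
    return lanes * gbps_per_lane / 1000

print(total_tbps(512, 200))   # 200G SerDes configuration -> 102.4
print(total_tbps(1024, 100))  # 100G SerDes configuration -> 102.4
```

Either lane configuration reaches the same aggregate capacity, which is consistent with the chip supporting both SerDes speeds.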

Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group, characterized Tomahawk 6 as a transformative leap rather than a conventional upgrade. According to Velaga, the chip merges record-breaking bandwidth, advanced power efficiency, and sophisticated adaptive routing within a unified architecture, promising to accelerate the deployment and advancement of large-scale Artificial Intelligence clusters across industries. Early market demand for the Tomahawk 6 series has reportedly exceeded expectations, signaling its potential to become a backbone technology for next-generation Artificial Intelligence infrastructure.
