Networking for Artificial Intelligence: Ethernet scale-up and scale-out

Broadcom is shipping new Ethernet silicon to power Artificial Intelligence scale-up and scale-out networks within and between data centers. Jericho4 extends fabrics beyond a single facility, while Tomahawk Ultra and Tomahawk 6 push latency and capacity milestones.

Broadcom outlined an expanded Ethernet portfolio designed to connect increasingly large Artificial Intelligence clusters both within data centers and across facilities. The company is shipping Jericho4 to enable distributed Artificial Intelligence computing beyond single-site limits, alongside Tomahawk Ultra for low-latency scale-up fabrics and Tomahawk 6 for extreme throughput. The announcements position Ethernet as an open, interoperable foundation for high-performance networking in Artificial Intelligence and high-performance computing environments.

Jericho4 is presented as a cornerstone for scale-out networks that span multiple locations. Broadcom describes the device as engineered to extend Artificial Intelligence-scale Ethernet fabrics beyond individual data centers, supporting congestion-free RoCE and a 3.2 Tbps HyperPort to improve interconnect efficiency. Messaging on the page highlights scaling Artificial Intelligence clusters to more than one million XPUs beyond single-facility limits. Broadcom frames Jericho4, Tomahawk Ultra, Tomahawk 6, and its Scale Up Ethernet (SUE) framework as complementary components that enable large distributed computing systems within a rack, across racks, and across data centers in an open and interoperable way.
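
For a rough sense of what a 3.2 Tbps HyperPort means at cluster scale, the sketch below estimates how many such ports a given cross-site traffic load would consume. The 100 Gbps-per-XPU demand figure and the hyperports_needed helper are illustrative assumptions, not Broadcom specifications.

```python
# Back-of-envelope sizing for cross-data-center AI fabric capacity.
# Assumptions (illustrative only, not Broadcom specifications): each XPU
# generates roughly 100 Gbps of sustained cross-site traffic, and each
# HyperPort carries the 3.2 Tbps cited in the article.

HYPERPORT_GBPS = 3_200            # 3.2 Tbps HyperPort, per the article
PER_XPU_CROSS_SITE_GBPS = 100     # assumed average cross-site demand per XPU

def hyperports_needed(num_xpus: int) -> int:
    """How many 3.2 Tbps HyperPorts cover the assumed cross-site demand."""
    total_gbps = num_xpus * PER_XPU_CROSS_SITE_GBPS
    # Ceiling division: a partially used port still occupies a full port.
    return -(-total_gbps // HYPERPORT_GBPS)

for xpus in (1_000, 10_000, 100_000):
    print(f"{xpus:>7,} XPUs -> {hyperports_needed(xpus):>5,} HyperPorts")
```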

For scale-up performance, Broadcom is shipping Tomahawk Ultra, which the company says reimagines the Ethernet switch for Artificial Intelligence and HPC workloads. Headline capabilities include ultra-low 250 ns latency, 64-byte line-rate switching, a lossless fabric, and in-network collectives, all intended to elevate Ethernet’s role in scale-up training and inference clusters. According to Broadcom leadership, Tomahawk Ultra reflects a multi-year engineering effort to overhaul every aspect of the switch architecture in pursuit of higher performance networking for Artificial Intelligence scale-up.
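
To put the 250 ns figure in perspective, the following sketch totals per-hop latency for a few hypothetical fabric depths. The 800 Gbps link speed, the hop counts, and the path_latency helper are assumptions made for illustration; only the 250 ns per-hop latency and the 64-byte frame size come from the announcement.

```python
# Illustrative latency budget through an Ethernet fabric built from switches
# with the 250 ns per-hop latency cited for Tomahawk Ultra. The link speed
# and hop counts are assumptions for this sketch, not product specifications.

SWITCH_HOP_S = 250e-9     # 250 ns per switch hop, from the article
LINK_GBPS = 800           # assumed link speed
PACKET_BYTES = 64         # minimum Ethernet frame, per the 64-byte line-rate claim

def serialization_delay(num_bytes: int, gbps: float) -> float:
    """Time to clock num_bytes onto a link of the given speed, in seconds."""
    return num_bytes * 8 / (gbps * 1e9)

def path_latency(hops: int) -> float:
    """Each hop pays switch latency plus serialization onto the next link."""
    return hops * (SWITCH_HOP_S + serialization_delay(PACKET_BYTES, LINK_GBPS))

for hops, topology in ((1, "single switch"), (3, "leaf-spine-leaf"), (5, "three-tier")):
    print(f"{topology:15s}: {path_latency(hops) * 1e9:7.1f} ns")
```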

Broadcom also announced it is now shipping the Tomahawk 6 switch series, billed as the world’s first 102.4 Tbps switch chip. Positioned for both scale-up and scale-out Artificial Intelligence networks, Tomahawk 6 combines very high bandwidth with power efficiency and adaptive routing features, and includes support for co-packaged optics. A virtual event hosted by theCUBE accompanies the launch, bringing together leaders from Broadcom, Juniper Networks, Arista Networks, and Bloomberg Intelligence to discuss the implications for deploying large Artificial Intelligence clusters. Broadcom characterizes customer demand as unprecedented and expects rapid impact on large-cluster rollouts.
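
As simple context for the 102.4 Tbps headline, the short sketch below divides that aggregate capacity across common Ethernet port speeds. Treat the resulting port counts as arithmetic rather than a datasheet; the configurations Tomahawk 6 actually supports are defined by Broadcom.

```python
# Simple port-count arithmetic for a 102.4 Tbps switch ASIC such as
# Tomahawk 6. The port speeds below are common Ethernet rates used for
# illustration; the configurations a given product actually supports are
# defined by the vendor, not by this sketch.

TOTAL_GBPS = 102_400   # 102.4 Tbps aggregate switching capacity

for port_gbps in (1_600, 800, 400, 200):
    ports = TOTAL_GBPS // port_gbps
    print(f"{port_gbps:>5} Gbps ports: {ports}")
```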

The page also points to additional materials for practitioners building Artificial Intelligence networks. These include expert blogs addressing myths of Artificial Intelligence networking and the role of Ethernet in smarter scale-up designs, as well as a downloadable Scale Up Ethernet Framework white paper. Collectively, the resources and product releases underscore Broadcom’s approach to advancing Ethernet for both rack-scale and data center-scale Artificial Intelligence interconnects.
