Cadence tapes out 64 Gbps UCIe chiplet interconnect on TSMC N3P

Cadence has taped out its third-generation Universal Chiplet Interconnect Express solution on TSMC's N3P node, targeting high-bandwidth, energy-efficient chiplet designs for advanced artificial intelligence, high-performance computing, and data center workloads.

Cadence has taped out its third-generation Universal Chiplet Interconnect Express (UCIe) IP solution, which achieves 64 Gbps per-lane speeds on the TSMC N3P process. The company positions the implementation as an enabler for the next wave of chiplet innovation as system architects pursue more complex artificial intelligence, high-performance computing, and data center architectures, and frames the milestone as placing it at the forefront of scalable, energy-efficient multi-die system design for demanding workloads.

The announcement highlights that, as process nodes advance to 3 nm and below, system-on-chip designers must balance power, performance, and area against stringent requirements for high-speed, reliable die-to-die communication. Cadence states that its UCIe IP solution is fully compliant with the UCIe specification and is designed specifically to address these challenges. The focus on standards compliance targets interoperability across chiplet-based ecosystems.

Cadence emphasizes that TSMC's N3P technology enables the UCIe IP to deliver strong power efficiency, intended to help customers meet aggressive energy budgets without sacrificing performance. The combination of 64 Gbps per-lane speed and claimed power efficiency is aimed at designers building multi-die systems such as artificial intelligence accelerators, high-performance computing platforms, and data center infrastructure. The company frames the tape-out as a key step toward robust, high-bandwidth chiplet connectivity on leading-edge process technology.
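
To put the per-lane figure in context, a rough, illustrative calculation follows: it multiplies the 64 Gbps per-lane rate by the lane counts the public UCIe specification defines per module (64 data lanes for an Advanced Package module, 16 for a Standard Package module). The lane counts and the resulting aggregates are assumptions taken from the specification for illustration, not figures from Cadence's announcement.

# Illustrative UCIe module bandwidth math; lane counts are assumptions
# from the public UCIe spec, not figures from Cadence's announcement.
GBPS_PER_LANE = 64  # per-lane data rate cited in the announcement

def module_bandwidth_gbps(lanes: int, gbps_per_lane: float = GBPS_PER_LANE) -> float:
    """Raw per-direction bandwidth of a single UCIe module, in Gbps."""
    return lanes * gbps_per_lane

# Lane counts per module as defined in the public UCIe specification.
for package, lanes in {"Advanced Package (x64)": 64, "Standard Package (x16)": 16}.items():
    gbps = module_bandwidth_gbps(lanes)
    print(f"{package}: {gbps:.0f} Gbps, about {gbps / 8:.0f} GB/s per direction per module")

Under those assumptions, a single Advanced Package module would carry roughly 512 GB/s per direction, which is the scale of die-to-die bandwidth the announcement is addressing.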
