Marvell unveils 64 Gbps bi-directional die-to-die interface IP in 2 nm for next-generation XPUs

Marvell introduced a 2 nm 64 Gbps bi-directional die-to-die interface IP that delivers 32 Gbps of simultaneous two-way connectivity per wire to boost XPU bandwidth while reducing power and die area. The IP is also available in 3 nm and includes adaptive power management to cut interface power consumption.

Marvell Technology announced what it describes as the industry's first 2 nm 64 Gbps bi-directional die-to-die (D2D) interconnect IP, aimed at improving bandwidth and performance for next-generation XPUs and data center designs. The interface provides 32 Gbps in each direction over a single physical wire, enabling simultaneous two-way connectivity. Marvell said the IP is also available in a 3 nm version and is intended to meet scaling demands in high-performance compute environments.

The company highlighted several technical metrics to quantify the new IP's advantages. Marvell reported a bandwidth density greater than 30 Tbps per square millimeter, which it said is more than three times the bandwidth density of UCIe at equivalent speeds. A minimal-depth configuration is claimed to reduce compute die area requirements to 15% of those of conventional implementations. The interface also incorporates advanced adaptive power management that automatically adjusts device activity in response to bursty data center traffic, reducing interface power consumption by up to 75% under normal workloads and up to 42% during peak traffic periods.
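As a rough sanity check, the quoted figures can be plugged into simple arithmetic. A minimal sketch follows; the constants come from Marvell's announcement, but the interface area and baseline power in the example are hypothetical values chosen purely for illustration, not figures Marvell disclosed:

```python
# Back-of-the-envelope checks on the announced figures. The constants
# (32 Gbps per direction, 30 Tbps/mm^2, 75% / 42% power reductions) are
# from the announcement; the 2 mm^2 area and 10 W baseline below are
# hypothetical inputs for illustration only.

PER_WIRE_GBPS_EACH_WAY = 32      # 64 Gbps bi-directional = 32 Gbps per direction
DENSITY_TBPS_PER_MM2 = 30        # claimed bandwidth density (> 30 Tbps/mm^2)


def aggregate_bandwidth_tbps(area_mm2: float) -> float:
    """Aggregate D2D bandwidth implied by the claimed density for a given area."""
    return area_mm2 * DENSITY_TBPS_PER_MM2


def power_after_reduction(base_watts: float, reduction_pct: float) -> float:
    """Interface power after adaptive power management cuts it by reduction_pct."""
    return base_watts * (1 - reduction_pct / 100)


if __name__ == "__main__":
    # Hypothetical 2 mm^2 interface region and 10 W baseline interface power.
    print(aggregate_bandwidth_tbps(2.0))               # 60.0 Tbps
    print(round(power_after_reduction(10.0, 75), 2))   # 2.5 W (normal workloads)
    print(round(power_after_reduction(10.0, 42), 2))   # 5.8 W (peak traffic)
```

Even a small slice of die edge therefore yields tens of terabits per second under the claimed density, which is the point of the bandwidth-density comparison against UCIe.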

Marvell positioned the D2D IP as setting a new standard for performance, power efficiency, and resiliency for chip designers targeting XPUs and next-generation data centers. The announcement focuses on technical gains in bandwidth density, silicon area savings, and dynamic power reduction as the primary benefits for designers aiming to pack more bandwidth into smaller die areas. Availability timelines, pricing, and partner or customer details were not stated.

Impact Score: 72

