TSMC showcases custom C-HBM4E as N3P logic dies target double efficiency

TSMC detailed plans for its HBM4 generation, including shifting custom C-HBM4E logic dies to N3P and standard base dies to N12 to cut operating voltages and boost efficiency. The company also outlined packaging roadmaps, including CoWoS-L support for up to 12 HBM stacks on 2026 Artificial Intelligence parts, and confirmed customers including Micron and SK Hynix.

At the Open Innovation Platform Ecosystem Forum in Amsterdam, TSMC outlined architecture and node changes for HBM4. The company’s custom C-HBM4E logic die is expected to shift to the N3P node, with operating voltage dropping from 0.8 V to 0.75 V, a move TSMC says targets roughly 2× better power efficiency than today’s DRAM processes. Standard HBM4 base dies will also change process: instead of the conventional DRAM process used for HBM3E, TSMC plans to manufacture HBM4 base dies on its N12 logic node, reducing operating voltage from 1.1 V to 0.8 V for an expected efficiency gain of around 1.5×.
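To see how much of those gains the voltage reductions alone could explain, dynamic CMOS power scales roughly with the square of supply voltage (P ≈ αCV²f). This back-of-envelope sketch is illustrative only, not TSMC's methodology; the remainder of the claimed gains would have to come from the process transitions themselves.

```python
def dynamic_power_ratio(v_old: float, v_new: float) -> float:
    """Ratio of old to new dynamic power, from voltage scaling alone.

    Assumes the first-order CMOS dynamic power model P = a*C*V^2*f
    with capacitance, frequency, and activity held constant.
    """
    return (v_old / v_new) ** 2

# C-HBM4E logic die: 0.8 V -> 0.75 V on N3P
print(round(dynamic_power_ratio(0.8, 0.75), 2))  # ~1.14x from voltage alone

# Standard HBM4 base die: 1.1 V -> 0.8 V on N12
print(round(dynamic_power_ratio(1.1, 0.8), 2))   # ~1.89x from voltage alone
```

Voltage alone accounts for only about 1.14× of the ~2× target on N3P, so most of that figure presumably reflects the logic-node transistors; conversely, the 1.1 V → 0.8 V drop alone would suggest more than the ~1.5× quoted for N12, a reminder that these vendor figures bundle many factors beyond supply voltage.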

For C-HBM4E, the base die not only moves to N3P but also integrates the memory controllers directly into the stack. Those controller blocks normally sit on the host SoC, and moving them into the base die makes the PHY a fully custom design. On packaging, TSMC said it is expanding its InFO and SoW options while continuing to rely on CoWoS as the main growth driver. The company has already moved from 1.5× to 3.3× reticle sizes with support for eight HBM stacks, and is progressing to CoWoS-L, which enables up to 12 HBM3E/HBM4 stacks for 2026 Artificial Intelligence parts, followed by a larger A16-generation version planned for 2027.

TSMC is lining up major customers for its custom HBM logic dies. Micron has selected the foundry to build the logic base die for its HBM4E parts, with volume production planned for 2027. SK Hynix is reportedly preparing its first custom HBM4E products for the second half of next year and will use TSMC’s 12 nm process for mainstream server-grade HBM base dies. TSMC’s roadmap also indicates that NVIDIA’s GPUs and Google’s TPUs will step up to a 3 nm node for their highest-end designs.

Impact Score: 68

Samsung strike threat raises chip supply risks

A possible labor strike at Samsung Electronics in South Korea is raising concerns about chip production disruptions, client defections, and pressure on its position in the global semiconductor race. The dispute centers on bonus rules, but the larger risk is damage to Samsung’s credibility as a reliable supplier for major tech customers.

Microsoft previews Shader Model 6.10 for GPU Artificial Intelligence engines

Microsoft has introduced Shader Model 6.10 in Agility SDK 1.720-preview with a new matrix API designed to unify access to dedicated GPU Artificial Intelligence hardware from AMD, Intel, and NVIDIA. The change aims to make neural rendering features easier to deploy across multiple vendors with a single programming model.

Europe’s Artificial Intelligence challenge is structural dependence

Europe has talent, research strength, and rising investment in Artificial Intelligence, but startups remain reliant on American infrastructure, platforms, and late-stage capital. The argument centers on digital sovereignty, interoperability, and ownership as the conditions for building durable European champions.
