At the Open Innovation Platform Ecosystem Forum in Amsterdam, TSMC outlined architecture and node changes for HBM4. The company’s custom C-HBM4E logic die is expected to shift to the N3P node, with the operating voltage dropping from 0.8 V to 0.75 V, a move TSMC says targets roughly 2× better power efficiency than today’s DRAM processes. Standard HBM4 base dies will also change process: instead of the conventional DRAM process used for HBM3E, TSMC plans to manufacture them on its N12 logic node, cutting the operating voltage from 1.1 V to 0.8 V for an expected efficiency gain of around 1.5×.
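As a rough back-of-envelope check (not a TSMC figure), dynamic CMOS power scales approximately with the square of supply voltage (P ∝ C·V²·f), so the voltage drops alone account for only part of the claimed gains; the remainder would have to come from the move to logic-class processes and the accompanying design changes. The sketch below runs that arithmetic using the voltages from the announcement; the quadratic-scaling model is a simplification that ignores leakage, frequency, and capacitance differences.

```python
# Back-of-envelope: dynamic CMOS power scales roughly with V^2 (P ~ C * V^2 * f).
# Voltages are the figures TSMC cited; the quadratic model is a simplification.

def v2_power_ratio(v_old: float, v_new: float) -> float:
    """Relative power at equal frequency and capacitance, assuming P ~ V^2."""
    return (v_old / v_new) ** 2

# C-HBM4E logic base die: 0.8 V today -> 0.75 V on N3P
print(f"C-HBM4E, voltage-only gain: {v2_power_ratio(0.8, 0.75):.2f}x")      # ~1.14x

# Standard HBM4 base die: 1.1 V (DRAM process) -> 0.8 V (N12 logic node)
print(f"HBM4 base die, voltage-only gain: {v2_power_ratio(1.1, 0.8):.2f}x")  # ~1.89x
```

On this simple model, the roughly 1.5× claim for the N12 base die sits comfortably within the voltage reduction alone, while the roughly 2× target for C-HBM4E implies most of the gain comes from the N3P process and design changes rather than the 0.05 V drop.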
For C-HBM4E, the base die not only moves to N3P but also integrates the memory controllers directly into the stack. Those controller blocks normally sit on the host SoC, and pulling them into the base die makes the PHY a fully custom design. On packaging, TSMC said it is expanding its InFO and SoW options while continuing to rely on CoWoS as the main growth driver. The company has already moved from 1.5× to 3.3× reticle sizes with support for eight HBM stacks and is progressing to CoWoS-L, which enables up to 12 HBM3E/HBM4 stacks for 2026 AI parts, followed by a larger A16-generation version planned for 2027.
TSMC is lining up major customers for its custom HBM logic dies. Micron has selected the foundry to build the logic base die for its HBM4E parts, with volume production planned for 2027. SK Hynix is reportedly preparing its first custom HBM4E products for the second half of next year and will use TSMC’s 12 nm process for mainstream server-grade HBM base dies. TSMC’s roadmap also indicates that NVIDIA’s GPUs and Google’s TPUs will step up to a 3 nm node for their highest-end designs.
