TSMC showcases custom C-HBM4E as N3P logic dies target double efficiency

TSMC detailed moves for its HBM4 generation, including a shift of custom C-HBM4E logic dies to N3P and base dies to N12 to cut operating voltages and boost efficiency. The company also outlined packaging roadmaps such as CoWoS-L to support up to 12 HBM stacks for 2026 Artificial Intelligence parts and confirmed customers including Micron and SK Hynix.

At the Open Innovation Platform Ecosystem Forum in Amsterdam, TSMC outlined architecture and node changes for HBM4. The company’s custom C-HBM4E logic die is expected to shift to the N3P node with a voltage reduction from 0.8 V to 0.75 V, a move TSMC says targets roughly 2× better power efficiency versus today’s DRAM processes. Standard HBM4 base dies will also change process: instead of the conventional DRAM process used for HBM3E, TSMC plans to manufacture HBM4 base dies on its N12 logic node, reducing operating voltage from 1.1 V to 0.8 V and delivering an expected efficiency gain of around 1.5×.
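
For rough context, first-order CMOS dynamic power scales with the square of supply voltage (P ≈ α·C·V²·f), so the lower voltages account for part, but not all, of the quoted gains; the rest comes from the logic nodes themselves. The sketch below is an illustrative back-of-the-envelope calculation using the voltages quoted above, not TSMC’s methodology, and the voltage-squared model is an assumption.

```python
# First-order CMOS dynamic power: P ~ alpha * C * V^2 * f.
# Holding switching activity, capacitance and frequency fixed, the relative
# power after a supply-voltage change is (V_new / V_old)^2. The voltages are
# those quoted in the article; the scaling model itself is only a rough
# illustration, not TSMC's methodology.

def voltage_only_power_ratio(v_old: float, v_new: float) -> float:
    """Relative dynamic power after a voltage change, all else held equal."""
    return (v_new / v_old) ** 2

# Standard HBM4 base die: DRAM-process 1.1 V -> N12 logic 0.8 V
hbm4 = voltage_only_power_ratio(1.1, 0.8)
print(f"HBM4 base die: {hbm4:.2f}x power, {1 / hbm4:.1f}x efficiency from voltage alone")

# C-HBM4E logic die: 0.8 V -> 0.75 V on N3P
chbm4e = voltage_only_power_ratio(0.8, 0.75)
print(f"C-HBM4E logic die: {chbm4e:.2f}x power, {1 / chbm4e:.1f}x efficiency from voltage alone")
```

On those numbers, the 1.1 V to 0.8 V step alone is in the range of the quoted ~1.5× base-die gain, while the small 0.8 V to 0.75 V step suggests most of the ~2× C-HBM4E figure comes from the N3P node rather than from voltage.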

For C-HBM4E, the base die not only moves to N3P but also integrates memory controllers directly into the stack. Those controller blocks normally sit on the host SoC, and moving them into the base die makes the PHY a fully custom design. On packaging, TSMC said it is expanding InFO and SoW options while continuing to rely on CoWoS as the main growth driver. The company has already moved from 1.5× to 3.3× reticle sizes with support for eight HBM chips and is progressing to CoWoS-L, which enables up to 12 HBM3E/HBM4 stacks for 2026 Artificial Intelligence parts, followed by a larger A16-generation version planned for 2027.

TSMC is lining up major customers for its custom HBM logic dies. Micron has selected the foundry to build the logic base die for its HBM4E parts, with volume production planned for 2027. SK Hynix is reportedly preparing its first custom HBM4E products for the second half of next year and will use TSMC’s 12 nm process for mainstream server-grade HBM base dies. TSMC’s roadmap also indicates NVIDIA’s top-end GPUs and Google’s TPUs will step up to a 3 nm node for their highest-end designs.

Securus trains Artificial Intelligence on inmates’ calls to flag planned crimes

Securus Technologies trained Artificial Intelligence models on years of recorded inmate calls and is piloting tools that scan calls, texts, and emails to predict and prevent crimes. Critics say many of the people whose communications were recorded may not know the recordings are being used to train these systems, and that the practice raises civil liberties concerns.

Artificial intelligence’s big impact on small business

Small businesses are using Artificial Intelligence tools for marketing, customer service, product descriptions and operations to improve efficiency and competitiveness. Case studies in the article highlight Henry’s House of Coffee, Something Sweet COOKie Dough and Aureate Capital leveraging Artificial Intelligence for SEO, customer insights and research.

SAP launches EU Artificial Intelligence Cloud for Europe’s digital sovereignty

SAP has rolled out EU Artificial Intelligence Cloud to give European organizations full control over data, infrastructure, and Artificial Intelligence applications, reducing dependence on American hyperscalers. The move follows SAP’s earlier announced €20 billion investment in sovereign cloud solutions for Europe.
