Samsung is overhauling the power delivery network in its HBM4E memory to address growing engineering challenges in next-generation AI chips as power density and thermal stress climb. The work follows the recent shipment of its first commercial HBM4, which already runs at 11.7 Gbps consistently with headroom up to 13 Gbps, and positions HBM4E as a more robust evolution of that design. In the move from HBM4 to HBM4E, the number of power bumps grows from 13,682 to 14,457, and the additional bumps are packed into the same space with thinner, denser wiring, which exacerbates electrical and thermal issues.
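The bump figures above imply a modest but meaningful density increase. A quick back-of-the-envelope calculation (the counts come from the article; the percentage is derived here, not quoted from Samsung):

```python
# Bump counts as reported for HBM4 and HBM4E; percentage derived here.
hbm4_bumps = 13_682
hbm4e_bumps = 14_457

added = hbm4e_bumps - hbm4_bumps
growth = added / hbm4_bumps * 100

print(f"{added} additional power bumps, ~{growth:.1f}% more in the same footprint")
# → 775 additional power bumps, ~5.7% more in the same footprint
# With the footprint unchanged, per-bump area shrinks by roughly the same
# factor, which is what forces the thinner, denser wiring described above.
```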
The higher bump count and denser routing drive up current density and resistance, causing IR drop: voltage lost as current passes through resistive wiring. The heat generated in the process raises resistance further, worsening the drop, and this feedback loop can degrade performance or even cause circuit failure if left uncontrolled. To counter these effects, Samsung rearchitected the power delivery rather than simply scaling existing layouts, targeting both the metal hierarchy and the physical distribution of power blocks on the base die to ease congestion and shorten critical paths.
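The IR-drop feedback loop can be made concrete with a toy model. Every number below is an illustrative assumption (copper-like temperature coefficient, a fixed load current, a simple linear thermal model), not a Samsung design value; the point is only to show how resistance, drop, and heat chase each other to a settling point:

```python
# Toy model of the IR-drop / heating feedback loop. All parameters are
# assumptions for illustration, not Samsung's design values.
ALPHA = 0.0039      # resistance tempco of copper, per °C (assumed)
R0 = 0.010          # rail resistance at 25 °C, ohms (assumed)
I = 30.0            # load current, amps (assumed)
THETA = 0.5         # thermal resistance, °C per watt of rail loss (assumed)
T_AMBIENT = 25.0

temp = T_AMBIENT
for _ in range(20):
    resistance = R0 * (1 + ALPHA * (temp - T_AMBIENT))  # hotter wire -> more ohms
    ir_drop = I * resistance                            # V = I * R
    heat = I * ir_drop                                  # P = I^2 * R, watts
    temp = T_AMBIENT + THETA * heat                     # dissipation heats the rail

print(f"settled IR drop ≈ {ir_drop * 1000:.0f} mV at ≈ {temp:.1f} °C")
```

With these mild assumptions the loop gain is small and the system settles; the hazard the article describes is that denser, thinner wiring raises both R0 and the thermal resistance, amplifying the loop until the drop eats into voltage margin or the rail overheats.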
To fix this, Samsung segmented the power network, breaking up the large centralized MET4 power block on the base die that had previously been laid out in big honeycomb-like sections near the interposer. That monolithic structure has been split into four smaller sections, and the upper metal layers are further subdivided to reduce congestion and shorten routing paths so current flows more evenly. According to Samsung, the results were significant: metal circuit defects dropped 97% compared with HBM4, and IR drop improved by 41%, giving the chip more voltage headroom for higher speeds and better reliability under demanding AI workloads.
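Why splitting one block into four shortens paths can be seen with a simple geometric sketch. This is not Samsung's layout data; it just models IR drop as proportional to the average distance from a block's feed point, sampled on a grid, and compares one large square block against four half-size quadrants, each with its own feed:

```python
# Toy geometry: average distance to a centered feed point, for one large
# block vs. each of four quadrant blocks (assumed model, not real layout data).
def avg_distance_to_center(side, samples=200):
    """Mean Euclidean distance from grid points in a square block to its center feed."""
    cx = cy = side / 2
    total = 0.0
    for i in range(samples):
        for j in range(samples):
            x = (i + 0.5) * side / samples
            y = (j + 0.5) * side / samples
            total += ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
    return total / samples ** 2

monolithic = avg_distance_to_center(1.0)   # one big centralized block
segmented = avg_distance_to_center(0.5)    # each of four quadrant blocks
print(f"average current path shrinks {monolithic / segmented:.1f}x after segmentation")
# → average current path shrinks 2.0x after segmentation
```

Since wire resistance scales with length, halving the average path roughly halves the resistive drop contributed by block-level routing, which is consistent in direction (though not in magnitude) with the 41% IR-drop improvement Samsung reports.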
