Samsung explores 2 nm custom logic dies for next wave of HBM

Samsung is reportedly developing custom HBM logic dies on process nodes ranging from 4 nm to 2 nm, aiming to serve next-generation Artificial Intelligence accelerators and enterprise demand after HBM4E. The effort is led by a newly established custom system-on-chip (SoC) team inside its System LSI division.

A report from ZDNet South Korea describes an early-stage 2 nm project underway at Samsung Semiconductor that targets custom logic dies for high-bandwidth memory. Industry sources cited in the article say Samsung’s HBM development team is exploring ways to tailor these logic dies to individual product requirements rather than relying solely on standardized designs. The company is rumored to be evaluating foundry process nodes described as ‘as advanced as 2 nm’ for its next wave of HBM products, although the report notes that it is not yet clear whether engineers will use Samsung’s SF2 or SF2P process variants.

The article states that Samsung’s sixth-generation HBM line, referred to as HBM4, is expected to use a 4 nm process node likely drawn from the SF4 family. An anonymous company insider is quoted as saying that ‘Samsung Electronics is designing a custom logic die for HBM under the leadership of the custom SoC (system-on-chip) team newly established within the System LSI Business Division last year…We are building a portfolio ranging from 4 nm to 2 nm to respond to the needs of various customers.’ This signals a strategic shift toward a broader process portfolio for memory logic that can be tuned for the performance, power, and integration needs of different clients.

According to the ZDNet report, Samsung expects that next-generation ultra-high-performance Artificial Intelligence accelerators will depend on the most advanced HBM modules available. The article says that ‘strong’ demand from the Artificial Intelligence enterprise industry is anticipated once 2 nm logic dies become viable, which is described as possibly post-2027, after the introduction of HBM4E, the seventh generation of Samsung’s HBM products. Together, these details suggest Samsung is positioning its custom HBM logic and process roadmap to capture future demand from cloud, data center, and accelerator vendors seeking tighter coupling between memory and compute.

Impact Score: 58

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
