Intel 14A node expected to cost more than 18A due to High-NA EUV

Intel's CFO said the 14A node will carry higher wafer costs than 18A, largely because it will use High-NA EUV tools. Intel says 14A also brings performance and power-efficiency gains enabled by RibbonFET 2, PowerDirect, and Turbo Cells.

At Citibank's Global 2025 TMT conference, Intel chief financial officer David Zinsner said the company's upcoming 14A process node is expected to be more expensive than 18A. Zinsner qualified the statement: the difference is not a large increase in investment spending but shows up as a higher wafer cost, driven in part by the plan to use High-NA EUV lithography tools on 14A, which were not used for 18A. The specific cost of ASML's Twinscan EXE:5200B High-NA EUV tool was not stated.

If Intel intends to attract external customers to 14A, it will need to set wafer prices high enough to recoup the higher tool costs. That pricing implication follows directly from the higher per-wafer expense tied to High-NA EUV adoption. Zinsner's comments frame the economic trade-off between deploying more advanced lithography equipment and preserving competitive wafer pricing for foundry and external customer engagements.
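To make the "higher tool cost shows up as higher wafer cost" mechanism concrete, here is a back-of-envelope amortization sketch. All numbers are illustrative placeholders, not Intel or ASML figures (the article notes the EXE:5200B's price is not stated), and real accounting would also cover service contracts, consumables, and the fact that a wafer may pass through a High-NA tool for several layers:

```python
def amortized_tool_cost_per_wafer(tool_capex, wafers_per_hour, utilization, lifetime_years):
    """Spread a lithography tool's capital cost over the wafers it exposes.

    tool_capex: purchase price in dollars (hypothetical here)
    wafers_per_hour: rated single-pass throughput
    utilization: fraction of calendar time the tool is productive
    lifetime_years: depreciation period
    """
    total_wafers = wafers_per_hour * utilization * 24 * 365 * lifetime_years
    return tool_capex / total_wafers

# Purely hypothetical inputs: a $400M tool, 180 wafers/hour, 80% utilization, 5-year life.
print(round(amortized_tool_cost_per_wafer(400e6, 180, 0.8, 5), 2))  # ≈ 63.42 dollars/wafer-pass
```

The point of the sketch is the shape of the relationship, not the numbers: capex that roughly doubles versus prior-generation tools flows through as a per-wafer surcharge that Intel must either absorb or price into 14A wafers.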

Technically, Intel claims 14A will deliver meaningful efficiency and performance improvements over 18A. The company projects roughly 15 to 20 percent better performance per watt, or a 25 to 35 percent reduction in power consumption. The node combines several process innovations: RibbonFET 2 updates Intel's gate-all-around transistor architecture; PowerDirect relocates the power delivery network to the chip backside to feed transistor sources and drains more directly; and Turbo Cells are taller, high-drive cells embedded in compact standard-cell libraries to shave critical timing paths and raise CPU and GPU frequencies without large area or power penalties. By contrast, shrinking features beyond 18A without resorting to multi-patterning is said to require higher-resolution lithography tools, which is where High-NA EUV comes in.
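Note that Intel's two figures describe different operating points, not the same claim restated. A quick sanity check, using assumptions of mine rather than Intel's methodology: at fixed performance, power scales inversely with performance per watt, so a 15 to 20 percent perf/watt gain alone implies only about a 13 to 17 percent power cut. The quoted 25 to 35 percent reduction therefore presumably corresponds to a different (e.g. lower-voltage) point on the curve, where dynamic power falls faster than performance:

```python
def iso_performance_power_reduction(perf_per_watt_gain):
    """Fractional power reduction at unchanged performance.

    If perf/watt improves by a factor of (1 + g), delivering the same
    performance takes 1 / (1 + g) of the old power.
    """
    return 1 - 1 / (1 + perf_per_watt_gain)

for gain in (0.15, 0.20):
    print(f"{gain:.0%} perf/watt gain -> {iso_performance_power_reduction(gain):.1%} less power")
# 15% -> ~13.0% less power; 20% -> ~16.7% less power
```

This is only arithmetic on the headline numbers; it says nothing about where on the voltage/frequency curve Intel measured either claim.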


Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.

Nvidia skips a new GeForce generation as Artificial Intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in Artificial Intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.

Where GPU debt starts to break

Stress in GPU-backed infrastructure financing is emerging around deals that lack the structural protections seen in the strongest transactions. Oracle, the Abilene Stargate project, and older CoreWeave debt illustrate different ways residual risk can surface when contracts, collateral, and counterparties fall short.

SK hynix starts mass production of 192 GB SOCAMM2

SK hynix has begun mass production of the 192 GB SOCAMM2, a next-generation memory module standard built on 1cnm LPDDR5X low-power DRAM. The module is positioned as a primary memory solution for next-generation Artificial Intelligence servers.

AMD taps GlobalFoundries for co-packaged optics in Instinct MI500

AMD is preparing a renewed manufacturing link with GlobalFoundries to bring co-packaged optics to its Instinct MI500 Artificial Intelligence accelerators. The move is aimed at improving bandwidth and power efficiency in data center systems by moving beyond copper-based interconnects.
