Applied Materials debuts new transistor systems for 2 nm logic chips

Applied Materials is rolling out new deposition, etch and materials modification systems designed to enhance performance and energy efficiency for 2 nm-class Gate-All-Around logic chips used in Artificial Intelligence computing.

Applied Materials introduced new deposition, etch and materials modification systems that boost the performance of leading-edge logic chips at 2 nm and beyond, aiming to supercharge Artificial Intelligence compute through atomic-scale improvements to the transistor. The focus is on refining the most fundamental electronic building block so that it delivers higher performance while maintaining tight process control at extremely small geometries.

The transition to Gate-All-Around transistors is described as a major industry inflection and a critical enabler of the energy-efficient computing needed for more powerful Artificial Intelligence chips. As 2 nm-class Gate-All-Around chips ramp to volume production this year, Applied Materials is introducing new material innovations specifically tailored to enhance next-generation Gate-All-Around transistors targeting angstrom nodes, aligning process technology with the aggressive scaling roadmap.

According to the company, the combined impact of the new chipmaking systems contributes a significant portion of the total energy-efficient performance gains of Gate-All-Around process node transitions. By targeting both transistor structures and associated materials processes, the systems are intended to improve overall logic chip capabilities for data-intensive workloads while supporting the continued evolution of energy-efficient Artificial Intelligence computing.

Impact Score: 65

Hugging Face launches TRL v1.0 for LLM fine-tuning

Hugging Face has released TRL v1.0 to standardize the post-training workflow behind large language models. The framework packages alignment methods, configuration tools, and scalable training into a more predictable engineering process.

LiteLLM supply chain attack exposes fragile developer trust

A compromised LiteLLM package on PyPI turned a popular Artificial Intelligence gateway into a malware delivery vehicle before a coding mistake exposed the attack. The incident underscored how deeply modern software stacks depend on fragile supply chain trust.

Google compression algorithm targets data center energy use

Google has unveiled TurboQuant, a compression algorithm designed to shrink large language model memory usage and improve efficiency. The approach points to a future where Artificial Intelligence models need less data center capacity and could run on smaller devices.
