Fraunhofer, TSRI partner on ferroelectric transistors for low-power memory in Artificial Intelligence chips

A German-Taiwanese team is developing hafnium oxide ferroelectric field-effect transistors for process nodes smaller than 3 nm to enable computing directly in memory and cut energy use in Artificial Intelligence chips and edge devices.

Fraunhofer IPMS, Fraunhofer IMWS, and the Taiwanese research institute TSRI have launched a joint research program to develop new memory for leading-edge chip technologies at process nodes smaller than 3 nm. The project focuses on nanosheet devices built as hafnium oxide ferroelectric field-effect transistors (FeMFETs). The partners describe these devices as particularly efficient and intend them to enable computing operations directly in memory, a capability they say will drastically reduce energy consumption compared with conventional architectures that separate memory and compute.
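
To make the compute-in-memory idea concrete, the following is a minimal sketch, not part of the project, of how an array of nonvolatile cells (standing in here for ferroelectric transistors storing weights as programmable states) can perform a matrix-vector multiply where the data already resides. The array size and quantization levels are illustrative assumptions.

```python
import numpy as np

# Toy model of a compute-in-memory array: each nonvolatile cell stores one
# weight as a discrete programmed level, and a matrix-vector multiply is
# carried out where the weights live instead of shuttling them to a CPU/GPU.

rng = np.random.default_rng(0)

# Hypothetical 64x64 array of stored weights, quantized to a few levels
# to mimic a limited number of programmable polarization states.
levels = np.linspace(-1.0, 1.0, 8)            # assumed 3-bit cell
weights = rng.choice(levels, size=(64, 64))   # programmed once, retained without power

def in_memory_matvec(x: np.ndarray) -> np.ndarray:
    """Multiply-accumulate performed 'inside' the array: inputs drive all
    columns in parallel and the accumulated row sums form the output."""
    return weights @ x

activations = rng.standard_normal(64)
print(in_memory_matvec(activations)[:4])
```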

The collaboration is motivated by rapidly growing demand for Artificial Intelligence and neuromorphic computing. The partners identify the key bottleneck as the transfer of data between main memory and the computing unit, which drives up latency and energy consumption in data centers and edge systems. By enabling compute-in-memory, the nanosheet FeMFET approach aims to lower both latency and energy use for memory-bound workloads.
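
A rough way to see why data movement dominates in memory-bound workloads is a back-of-envelope energy model. The sketch below uses purely illustrative per-operation energy numbers, chosen only to show the shape of the argument and not taken from the partners, and compares fetching every operand from main memory with keeping weights resident in a nonvolatile array.

```python
# Back-of-envelope model: the energy to move each operand off-chip is assumed
# to dwarf the energy of the arithmetic itself. All numbers are illustrative
# assumptions, not figures from the project.

DRAM_ACCESS_PJ_PER_BYTE = 100.0   # assumed off-chip transfer cost
MAC_PJ_PER_OP = 1.0               # assumed cost of one multiply-accumulate
BYTES_PER_OPERAND = 2             # e.g. 16-bit weights and activations

def energy_von_neumann(num_macs: int) -> float:
    """Weights and activations are fetched from main memory for every MAC."""
    moved_bytes = num_macs * 2 * BYTES_PER_OPERAND
    return moved_bytes * DRAM_ACCESS_PJ_PER_BYTE + num_macs * MAC_PJ_PER_OP

def energy_compute_in_memory(num_macs: int) -> float:
    """Weights stay in the nonvolatile array; only activations move on-chip,
    folded here into a (still assumed) modestly higher per-MAC cost."""
    return num_macs * 2.0 * MAC_PJ_PER_OP

macs = 10**9  # one billion MACs, roughly one pass through a small layer
print(f"von Neumann:       {energy_von_neumann(macs) / 1e9:.0f} mJ")
print(f"compute-in-memory: {energy_compute_in_memory(macs) / 1e9:.0f} mJ")
```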

The research partners frame the work as laying the foundation for the next generation of energy-efficient Artificial Intelligence chips across a range of devices. Target applications cited include smartphones, automobiles, and medical devices. The project combines expertise from the two Fraunhofer institutes and TSRI to tailor ferroelectric transistor technology to sub-3 nm process nodes, with the goal of integrating low-power memory solutions into future chips for both data center and edge deployments.

