Artificial intelligence systems are driving a rapid expansion of data centres that consume vast amounts of electricity, as training and running modern models depend on clusters of GPUs and specialized chips that constantly shuttle data between memory and processors. Analysts warn that artificial intelligence data centres could soon rival small countries in power demand, intensifying pressure on governments pursuing climate targets and on companies facing rising energy costs. To break this link between performance and power use, researchers are exploring computing approaches that move away from conventional digital logic toward hardware that performs calculations directly where data is stored.
A team at Zhejiang Lab in China has focused on memristors, electronic components often described as resistors with memory, which can both store information and perform computations in place. Arrays of memristors can physically implement neural networks: each device encodes a weight, and computation arises from current flowing through the grid. This in-memory computing eliminates much of the costly data shuttling that wastes energy in conventional artificial intelligence hardware, but real memristors are noisy and imprecise, so naïvely mapping digital networks onto them degrades accuracy and stability. The Zhejiang researchers report a training method called error‑aware probabilistic update, or EaPU, which tolerates small discrepancies between target and actual weights and writes to a device only when its error crosses a defined threshold. With EaPU, fewer than 0.1% of the network's parameters are updated at each learning step, dramatically reducing write operations, which consume far more energy and wear devices out far faster than reads do.
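The core idea of updating only out-of-tolerance devices can be illustrated with a short simulation. This is a minimal sketch, not the paper's exact formulation: the threshold value, the deterministic update rule standing in for EaPU's probabilistic criterion, and all array sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy memristor crossbar: each weight corresponds to a device conductance.
# On real hardware the forward pass is a physical matrix-vector product
# (Ohm's and Kirchhoff's laws); here we only simulate the update step.
n_out, n_in = 10, 64
weights = rng.normal(0.0, 0.1, size=(n_out, n_in))

def eapu_update(weights, target_weights, threshold):
    """Sketch of an error-aware update: write a device only when the gap
    between its target and actual weight exceeds the threshold. The
    threshold rule is a deterministic stand-in for EaPU's probabilistic
    criterion (an assumption for illustration)."""
    error = target_weights - weights
    mask = np.abs(error) > threshold      # devices whose error matters
    updated = weights + error * mask      # costly writes only where masked
    return updated, mask.mean()           # new weights, fraction written

# One simulated learning step: gradient descent proposes new target
# weights, but only out-of-tolerance devices receive a write pulse.
grad = rng.normal(0.0, 0.01, size=weights.shape)
target = weights - 0.1 * grad
updated, write_frac = eapu_update(weights, target, threshold=0.005)
print(f"fraction of devices written: {write_frac:.4f}")
```

Because most per-step weight changes are small relative to the tolerance, the write fraction stays tiny, which is the mechanism behind the reported sub-0.1% update rate and the resulting energy and endurance gains.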
The team’s measurements indicate that training energy consumption fell by a factor of 50 compared with other memristor‑based methods, that device lifetime increased by around 1,000 times thanks to fewer write cycles, and that accuracy improved by about 60% relative to previous memristor approaches, approaching the level of digital supercomputers. Compared with conventional GPU‑based systems, overall training energy in their test setups could fall by around a million times, pointing to a radically different efficiency profile from today’s data centres. The researchers built a memristor array fabricated in a 180‑nanometre process and successfully trained neural networks for image denoising and super‑resolution, achieving results comparable to digital training while using vastly less power, although hardware constraints limited model size. They have not yet tested EaPU on large language models with billions or trillions of parameters, owing to the challenge of building large, reliable arrays, but they argue that the principle of updating only where errors matter should extend to big text models and to other in‑memory devices such as ferroelectric transistors and magnetoresistive RAM. If approaches like EaPU mature, artificial intelligence workloads could shift to far more frugal data centres and edge devices, substantially easing pressure on electricity grids and emissions while introducing new engineering and tooling challenges around probabilistic, hardware‑aware training.
