Artificial intelligence's massive appetite for power has become a front-and-center issue as the technology rapidly grows, but much of the discussion has focused on the training of sophisticated models. Now, MIT Technology Review's new AI energy package digs deeper, examining not just the energy consumption we already know about but also how demand is poised to expand as people use AI for everyday tasks such as answering text queries, generating images, or creating videos, a process known as inference. Reporters spent six months investigating how these everyday uses, which experts predict will soon outstrip the energy demands of the already enormous training phase, will shape future energy usage in tangible ways.
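For readers unfamiliar with the term, inference is simply a trained model answering a request. The short Python sketch below, which uses the open-source Hugging Face transformers library with an illustrative small model and prompt (not anything drawn from the MIT package), shows what a single such request looks like; every call of this kind draws power on whatever hardware serves it, and that per-request demand is what the reporters set out to measure.

```python
# A minimal sketch of a single inference request using the Hugging Face
# transformers "pipeline" API. The model name and prompt are illustrative.
from transformers import pipeline

# Loading the model is a one-time cost; each call to the pipeline afterwards
# is an inference step that consumes energy on the serving hardware.
generator = pipeline("text-generation", model="gpt2")

response = generator("What is the energy cost of this answer?", max_new_tokens=40)
print(response[0]["generated_text"])
```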
This special series outlines the broad impacts, from environmental concerns to industry promises. The MIT team provides a detailed brief on quantifying AI's energy and emissions footprint, aiming to demystify how and where this power is sourced and who ultimately pays for it. The investigation led reporters to Nevada's deserts, where sprawling data centers demand massive amounts of water for cooling, and to Louisiana, the planned site of Meta's largest data center, where the drive to expand AI relies on energy sources with hidden environmental costs. The overview is also skeptical of clean energy solutions such as nuclear, arguing that powering AI entirely through such methods is unlikely in the near term. Still, the package includes reasons for optimism, suggesting that advances may usher in more efficient, less energy-intensive AI systems in the years to come.
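As a rough illustration of the kind of accounting such a footprint brief involves, the sketch below multiplies an assumed per-query energy figure by an assumed query volume and a grid carbon-intensity factor. All of the numbers are placeholders chosen for arithmetic clarity, not figures from the MIT reporting.

```python
# Back-of-envelope sketch of estimating inference emissions.
# All inputs are illustrative placeholders, not figures from the MIT report.

energy_per_query_wh = 3.0          # assumed energy per text query, in watt-hours
queries_per_day = 100_000_000      # assumed daily query volume
grid_intensity_g_per_kwh = 400.0   # assumed grid carbon intensity, gCO2e per kWh

daily_energy_kwh = energy_per_query_wh * queries_per_day / 1000.0
daily_emissions_tonnes = daily_energy_kwh * grid_intensity_g_per_kwh / 1e6

print(f"Daily energy: {daily_energy_kwh:,.0f} kWh")          # 300,000 kWh
print(f"Daily emissions: {daily_emissions_tonnes:,.0f} tCO2e")  # 120 tonnes
```

The point of such an exercise is less the headline number than where it comes from: the per-query cost, the volume of queries, and the carbon intensity of the local grid, which is why the reporting follows the power to specific places like Nevada and Louisiana.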
Alongside this, the newsletter rounds up broader tech news: research shows that large language models can be highly persuasive, raising questions about AI's influence on opinions; courts are grappling with AI errors, including hallucinated information in legal filings; and new laws, like the US Take It Down Act, target harms accelerated by emerging tech, including non-consensual intimate imagery and deepfakes. Other highlights cover corporate AI partnerships, the sale of 23andMe, advances in autonomous vehicles and blended alternative meats, and labor disputes over AI-generated voices in gaming. The edition concludes with philosophical reflections on the uneasy but growing relationships between people and intelligent machines, as humanity navigates new questions about trust, ethics, and the boundaries separating human and digital minds.