Generative Artificial Intelligence's Environmental and Societal Impacts Highlighted in GAO Report

A GAO assessment reveals significant environmental and human effects of generative Artificial Intelligence, urging policymakers to address resource use, labor changes, and risks linked to these rapidly advancing technologies.

The U.S. Government Accountability Office (GAO) has released a comprehensive report evaluating the far-reaching effects of generative Artificial Intelligence on environmental resources and human society. While generative Artificial Intelligence promises transformative productivity and innovation across multiple industries—ranging from enhanced customer service automation to advanced content creation—the technology relies heavily on substantial energy and water inputs. Despite its widespread adoption, disclosure and monitoring around generative Artificial Intelligence’s electricity and water use remain limited, making it difficult to fully gauge its environmental footprint.

The report notes that most energy-use estimates to date have centered on the power consumed while training large generative Artificial Intelligence models and the resulting carbon emissions. The generative Artificial Intelligence boom is a major driver of growing demand for data centers, which the GAO notes could account for as much as 6% of U.S. electricity consumption by 2026, up from 4% in 2022. However, concrete figures for how much of this usage directly results from generative Artificial Intelligence remain elusive, as companies frequently do not release granular data—particularly regarding water consumption for cooling systems.

In addition to environmental risks, generative Artificial Intelligence presents several human-scale challenges. These include job displacement, the proliferation of misinformation (such as deepfakes), increased cybersecurity concerns, and potential threats to personal safety. The GAO identified five categories of human effects, emphasizing difficulties in providing definitive risk assessments due to the technology’s rapid evolution and the lack of full transparency from private developers. To address these intertwined challenges, the report outlines policy options: maintaining current practices; improving industry data collection and disclosure; encouraging innovation for more efficient algorithms and hardware; promoting the adoption of risk management frameworks; and sharing best practices or developing standards. The GAO advocates for a combination of these actions by legislators, regulators, industry, and research institutions to better understand and balance the benefits and risks of generative Artificial Intelligence technologies as development accelerates.


