A growing clash between the Department of Defense and artificial intelligence firms is spotlighting a core uncertainty in US law: whether the government is allowed to conduct mass surveillance on Americans using commercial data and advanced analytics. The standoff began when the Pentagon sought to use Anthropic’s Claude system to analyze bulk commercial data on Americans, prompting Anthropic to insist that its artificial intelligence not be used for mass domestic surveillance or autonomous weapons. After negotiations collapsed, the Pentagon labeled Anthropic a supply chain risk, while rival OpenAI initially agreed to let the Pentagon use its systems for “all lawful purposes,” triggering backlash from users and protesters concerned about domestic monitoring.
OpenAI has now revised its agreement to state that its artificial intelligence system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” including through “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” OpenAI chief executive officer Sam Altman has indicated that existing law already bars domestic surveillance by the Department of Defense, while Anthropic chief executive officer Dario Amodei contends that such surveillance is effectively legal because statutes have not kept pace with artificial intelligence capabilities. Legal scholars note that vast amounts of public and commercially available information, from social media posts to mobile location and web browsing records, fall outside meaningful constitutional or statutory limits, and that the government can purchase this data and then use artificial intelligence to analyze it at scale.
Experts argue that artificial intelligence supercharges surveillance by aggregating disparate data points that are not individually sensitive or regulated and turning them into powerful profiles and behavioral insights. As long as the underlying information is collected lawfully, agencies can feed it into artificial intelligence systems without clear legal constraints. OpenAI’s new contractual language may not significantly restrict what the Pentagon considers lawful uses, and questions remain about inadvertent surveillance, monitoring of foreign nationals or undocumented immigrants in the US, and who decides when the law changes. The company also promises technical safeguards such as a “safety stack” and in-house oversight, but it is unclear how far these measures can practically limit military operations. Former officials warn that allowing a private vendor to disable tools during legitimate national security missions could itself be dangerous, underscoring the need for congressional “hard lines” rather than private negotiations. Senator Ron Wyden is seeking bipartisan backing for legislation, including the Fourth Amendment Is Not For Sale Act, first introduced in 2021, and has warned that creating artificial intelligence profiles of Americans from commercially purchased data represents a chilling expansion of mass surveillance that should not be permitted.
