Trump bans US government use of Anthropic in clash over AI safeguards

President Donald Trump has ordered federal agencies to stop using Anthropic's artificial intelligence (AI) tools after the company refused Pentagon demands for unrestricted military use, triggering a rare move to label a major tech firm a national security supply chain risk.

President Donald Trump has ordered every federal agency to immediately halt the use of Anthropic's AI tools, escalating a confrontation over how advanced models can be deployed in warfare and domestic security. In a Truth Social post, Trump said the government "will not do business" with Anthropic again and warned he would use the "Full Power of the Presidency" to ensure compliance during a six-month phase-out. The move follows Anthropic's refusal to accept US military demands for what officials described as access to "any lawful use" of its technology.

The dispute centers on Anthropic’s internal safeguards, which are designed to prevent uses it describes as “mass surveillance” and “fully autonomous weapons”. After negotiations broke down, US Secretary of Defense Pete Hegseth said he had deemed Anthropic a “supply chain risk”, a designation that would make it the first US company to be publicly treated this way and would prohibit any business working with the military from “any commercial activity with Anthropic”. Anthropic responded that it had not received direct notice from the White House or the military on the status of talks and vowed to challenge any supply chain risk designation in court, calling it legally unsound and a dangerous precedent for companies negotiating with the government.

Anthropic has supplied AI tools, including its Claude system, to US government and military customers since 2024 and was the first advanced AI company to have its models used in agencies handling classified work. Its Pentagon work is part of a contract worth $200m, and the company's most recent valuation, earlier this month, put its worth at $380bn, according to reports. A former defense department official said Anthropic appeared to hold the upper hand in the dispute, arguing the company "simply do[es] not need the money" and describing the legal basis for invoking the Defense Production Act or labeling the firm a supply chain risk as "extremely flimsy".

The clash has widened into an industry-wide debate over military access to commercial AI models. OpenAI chief executive Sam Altman privately told staff he shared Anthropic's "red lines" on uses such as domestic surveillance and autonomous offensive weapons, and said any OpenAI military contracts would exclude those applications. Altman later confirmed on X that OpenAI had struck a deal with the Department of War to run its models on classified cloud networks, even as he questioned how Anthropic's Pentagon and Palantir arrangements were originally structured. Anthropic chief executive Dario Amodei, a former OpenAI executive who left to co-found the rival startup, was summoned to Washington DC for high-stakes talks in which Hegseth threatened both the Defense Production Act and the supply chain risk label if the company did not grant unfettered access. By Thursday, Amodei said he preferred to end Pentagon work rather than comply, while Anthropic told customers that the primary impact of Trump's order would fall on firms that also contract with the military, which may have to stop using Anthropic on defense-related projects.


