United States Secretary of War Pete Hegseth has designated Anthropic a national security threat after the company refused to remove contractual restrictions on the military use of its technology for autonomous killing and mass surveillance. The United States Department of War (DoW) requires Artificial Intelligence companies to permit “all lawful use” of their models without contractual exceptions, but Anthropic sought two explicit carve-outs: no mass domestic surveillance and no fully autonomous weapons. When Anthropic would not abandon these safeguards, Hegseth labeled the company a “supply chain risk”, a designation not previously applied to an American firm, and within hours OpenAI signed a contract with the Pentagon to replace Anthropic in the military supply chain.
The confrontation is seen as a watershed for how Artificial Intelligence will shape modern warfare and European security. Artificial Intelligence is already in use in the wars in Iran and Ukraine, and it would figure in any future conflict involving EU countries. A King’s College London study found that frontier Artificial Intelligence models deployed tactical nuclear weapons in 20 out of 21 war games and never once chose de-escalation, highlighting fears that such systems exhibit hallucination, brittleness and an escalation bias that can make lethal errors irreversible. European governments have promoted international commitments on the military use of Artificial Intelligence, including requirements for human control over lethal autonomous systems, but these norms depend on mutual restraint and on trust that all major powers, including the United States, remain committed.
The response from Europe is portrayed as critical for both regional security and the wider rules-based order the EU has sought to defend. The analysis urges European policymakers to favor Artificial Intelligence developers that verifiably respect international law in procurement decisions; to direct defense budgets into rigorous reliability and safety testing of military Artificial Intelligence systems, particularly the United States equipment on which Europe depends; and to demand clear assurances from the DoW and OpenAI about how human oversight will be ensured. Political conditions are described as favorable: polling shows 79% of Americans want humans making final decisions on lethal force, a “QuitGPT” boycott is reportedly costing OpenAI subscribers, and hundreds of its employees have supported Anthropic’s stance, prompting new contract language on domestic surveillance. The argument concludes that European safety cannot rest on United States assurances alone and that robust multilateral commitments and verification mechanisms must bind both allies and adversaries — commitments that will be impossible to construct if the United States steps back from restraint and the EU remains silent.
