Pentagon blacklist of Anthropic over autonomous weapons alarms Europe

The United States' decision to label Anthropic a security risk for refusing to allow military use of its technology for autonomous killing and mass surveillance is raising concerns in Europe about the future of responsible Artificial Intelligence in warfare and the credibility of international norms.

United States Secretary of War Pete Hegseth has designated Anthropic a national security threat after the company refused to remove contractual restrictions on the military use of its technology for autonomous killing and mass surveillance. The United States Department of War (DoW) requires Artificial Intelligence companies to permit "all lawful use" of their models without contractual exceptions, but Anthropic sought two explicit carve-outs: no mass domestic surveillance and no fully autonomous weapons. When Anthropic would not abandon these safeguards, Hegseth labeled the company a "supply chain risk", a designation not previously used against an American firm. Just hours later, OpenAI signed a contract with the Pentagon to replace Anthropic in the military supply chain.

The confrontation is seen as a watershed for how Artificial Intelligence will shape modern warfare and European security. Artificial Intelligence is already used in the wars in Iran and Ukraine, and would likely figure in any future conflict involving EU countries. A King's College London study found that frontier Artificial Intelligence models deployed tactical nuclear weapons in 20 out of 21 war games, and never once chose de-escalation, highlighting fears that such systems exhibit hallucination, brittleness and an escalation bias that can make lethal errors irreversible. European governments have promoted international commitments on the military use of Artificial Intelligence, including requirements for human control over lethal autonomous systems, but these norms depend on mutual restraint and on trust that all major powers, including the United States, remain committed.

The response from Europe is portrayed as critical for both regional security and the wider rules-based order the EU has sought to defend. The analysis urges European policymakers to favor Artificial Intelligence developers that verifiably respect international law in procurement decisions; to direct defense budgets into rigorous reliability and safety testing of military Artificial Intelligence systems, particularly the United States equipment on which Europe depends; and to demand clear assurances from the DoW and OpenAI about how human oversight will be ensured. Political conditions are described as favorable: polling shows 79% of Americans want humans making final decisions on lethal force, a "QuitGPT" boycott is reportedly costing OpenAI subscribers, and hundreds of its employees have supported Anthropic's stance, prompting new contract language on domestic surveillance. The argument concludes that European safety cannot rest on United States assurances alone: robust multilateral commitments and verification mechanisms must bind both allies and adversaries, and these will be impossible to construct if the United States steps back from restraint while the EU remains silent.

Impact Score: 70

UK delays Artificial Intelligence copyright reform

The UK government has postponed immediate copyright reform for Artificial Intelligence, leaving developers, creatives, and rightsholders to operate under existing law. Licensing, transparency, digital replicas, and future litigation are now set to shape the next phase of policy.

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.

OpenAI expands cyber model access through trusted program

OpenAI has introduced GPT-5.4-Cyber as a restricted model for cybersecurity professionals, widening access through its Trusted Access for Cyber program. The release highlights both the defensive value and misuse risks of more capable Artificial Intelligence tools in security work.
