Senior US military leaders, including defense secretary Pete Hegseth, met with Anthropic executives on Tuesday in an attempt to resolve a deepening dispute over how the Pentagon can use the company’s Claude large language model. Hegseth reportedly gave Anthropic chief executive Dario Amodei until the end of the day on Friday to accept the Department of Defense’s terms or face penalties. Defense officials want broad access to Claude’s capabilities across military operations, while Anthropic has pushed back on uses such as mass surveillance and autonomous weapons systems capable of killing without human input.
The Department of Defense has already integrated Claude into some of its systems but has threatened to cut ties over what top officials view as restrictive guardrails. Punitive options under consideration include canceling a massive contract with Anthropic and labeling the company a “supply chain risk”. The Pentagon signed agreements with several major artificial intelligence (AI) firms, including Anthropic, Google and OpenAI, in July last year, offering them contracts worth up to $200m. Until this week Claude was the only model cleared for use in classified systems, but a new deal signed on Monday allowed military personnel to deploy Elon Musk’s xAI chatbot in those environments, despite controversy over the chatbot’s role in producing nonconsensual sexualized images of children.
Both xAI and OpenAI have agreed to the Pentagon’s terms, with a defense official saying OpenAI had allowed its model to be used for “all lawful purposes”. The current dispute follows reports that the US military used Claude to help capture the Venezuelan leader Nicolás Maduro, and comes amid a broader Trump administration push to integrate AI into warfare and win what Donald Trump describes as a global AI arms race. Emil Michael, the Pentagon’s chief technology officer and a former Uber executive, has urged Anthropic to “cross the Rubicon” and align its guardrails with US “Department of War” use cases, provided they are lawful. Amodei has consistently advocated for stronger AI regulation, backed a political action committee focused on safeguards, opposed Trump in the 2024 presidential race and hired several former Biden staffers; those factors reportedly helped drive a pro-Trump venture capital firm to walk away from investing in Anthropic. Against this political and commercial backdrop, the Pentagon’s multibillion-dollar investment in AI-enabled drones and targeting systems is sharpening urgent ethical questions about how much lethal decision-making power should be delegated to machines. Conflicts such as the war in Ukraine already feature deadly semiautonomous drones operating with limited human control.
