US military presses Anthropic to relax Claude safety limits

Senior US defense officials are pressuring Anthropic to loosen safeguards on its Claude model, threatening contracts and security designations if the company does not allow broader military uses. The clash highlights growing tensions over how far Artificial Intelligence firms will go in enabling battlefield and surveillance applications.

Senior US military leaders, including defense secretary Pete Hegseth, met Anthropic executives on Tuesday to resolve a deepening dispute over how the Pentagon can use the company’s Claude large language model. Hegseth reportedly gave Anthropic chief executive Dario Amodei until the end of the day on Friday to accept the Department of Defense’s terms or face penalties. Defense officials want broad access to Claude’s capabilities across military operations, while Anthropic has pushed back on uses such as mass surveillance and autonomous weapons systems capable of killing without human input.

The Department of Defense has already integrated Claude into some of its systems but has threatened to cut ties over what top officials view as restrictive guardrails. Punitive options under consideration include canceling a massive contract with Anthropic and labeling the company a “supply chain risk”. The Pentagon signed agreements with several major Artificial Intelligence firms, including Anthropic, Google and OpenAI, in July last year, offering them contracts worth up to $200m. Until this week Claude was the only model cleared for use in classified systems, but a new deal signed on Monday allowed military personnel to deploy Elon Musk’s xAI chatbot in those environments, despite controversy over the chatbot’s role in producing nonconsensual sexualized images of children.

Both xAI and OpenAI have agreed to the Pentagon’s terms, with a defense official saying OpenAI had allowed its model to be used for “all lawful purposes”. The current dispute follows reports that the US military used Claude to help capture Venezuelan leader Nicolás Maduro, and comes amid a broader Trump administration push to integrate Artificial Intelligence into warfare and win what Donald Trump describes as a global Artificial Intelligence arms race. Emil Michael, the Pentagon’s chief technology officer and a former Uber executive, has urged Anthropic to “cross the Rubicon” and align its guardrails with US “Department of War” use cases, provided they are lawful. Amodei, for his part, has consistently advocated for stronger Artificial Intelligence regulation, backed a political action committee focused on safeguards, opposed Trump in the 2024 presidential race and hired several former Biden staffers, factors that reportedly helped drive a pro-Trump venture capital firm to walk away from investing in Anthropic. Against this political and commercial backdrop, the Pentagon’s multibillion-dollar investment in Artificial Intelligence-enabled drones and targeting systems is intensifying urgent ethical questions over how much lethal decision-making power should be delegated to machines. Conflicts such as the war in Ukraine already feature deadly semiautonomous drones operating with limited human control.

Impact Score: 70

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
