Judge temporarily blocks Pentagon action against Anthropic

A federal judge temporarily barred the Pentagon from labeling Anthropic a supply chain risk and blocked enforcement of a presidential directive telling agencies to stop using the company’s chatbot Claude. The ruling found the government’s measures appeared punitive and likely unlawful.

A federal judge in San Francisco temporarily blocked the Pentagon from labeling artificial intelligence company Anthropic a supply chain risk and halted enforcement of President Donald Trump’s social media directive ordering federal agencies to stop using Anthropic and its chatbot Claude. U.S. District Judge Rita Lin said the administration’s actions appeared arbitrary and capricious and could seriously damage the company.

Lin’s ruling followed a 90-minute hearing in San Francisco federal court on Tuesday, where she pressed the government on why it took the extraordinary step of punishing Anthropic after negotiations over a defense contract broke down. The dispute centered on Anthropic’s effort to prevent its artificial intelligence technology from being used in fully autonomous weapons or in surveillance of Americans. Anthropic argued that the government had imposed an unjustified stigma as part of an unlawful campaign of retaliation, while the Pentagon maintained it should be able to use Claude in any way it deems lawful.

Lin said the case was not about resolving the broader policy fight over military or domestic uses of artificial intelligence, but about whether the government’s response was lawful. She wrote that the broad punitive measures, including Defense Secretary Pete Hegseth’s use of a rare military authority previously directed at foreign adversaries, appeared designed to punish Anthropic rather than protect legitimate government interests. She also wrote that nothing in the governing statute supports treating an American company as a potential adversary for disagreeing with the government.

The order is stayed for a week; it does not require the Pentagon to use Anthropic’s products, nor does it prevent the department from moving to other artificial intelligence providers. Anthropic said it was grateful for the swift ruling and pleased the court agreed it was likely to succeed on the merits. The company said it brought the case to protect its business and customers while remaining focused on working productively with the government to ensure Americans benefit from safe, reliable artificial intelligence.

Anthropic has also filed a separate, narrower case that is still pending in the federal appeals court in Washington, D.C. That case concerns a different rule the Pentagon is using to try to declare Anthropic a supply chain risk. The Pentagon did not immediately respond to a request for comment. Supporting briefs were filed by Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.

Impact Score: 68

Why extended artificial intelligence reasoning may be wasted spend

Research and practical testing suggest many reasoning models generate long chains of thought that do not materially improve answers on routine tasks. If so, much of the cost of premium artificial intelligence usage pays for reasoning tokens, both those shown to users and those hidden, rather than for better results.
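
A quick back-of-the-envelope calculation makes the claim concrete. The token counts and the per-token price below are illustrative assumptions, not figures from the research; the point is simply that when the chain of thought dwarfs the final answer, reasoning dominates output-token spend.

# Back-of-the-envelope split of output-token spend for one call.
# All numbers are illustrative assumptions, not real model pricing
# or measured token counts.

PRICE_PER_OUTPUT_TOKEN = 60 / 1_000_000  # assumed $60 per 1M output tokens

def reasoning_cost_share(reasoning_tokens: int, answer_tokens: int) -> float:
    """Fraction of output-token spend consumed by the chain of thought."""
    return reasoning_tokens / (reasoning_tokens + answer_tokens)

reasoning, answer = 4_000, 300  # long hidden reasoning, short final answer
share = reasoning_cost_share(reasoning, answer)
cost = (reasoning + answer) * PRICE_PER_OUTPUT_TOKEN

print(f"Reasoning share of output spend: {share:.0%}")   # ~93%
print(f"Output cost for this single call: ${cost:.4f}")  # $0.2580

Under these assumed numbers, over nine-tenths of what the caller pays for output buys reasoning the user may never read.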

DRAM stocks fall after Google TurboQuant debut

DRAM manufacturers came under pressure after Google introduced TurboQuant, which it says can sharply reduce the memory needs of artificial intelligence models while speeding up inference. The announcement coincided with notable declines in shares of Micron, SK Hynix, and Samsung Electronics.
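
For background on why smaller numeric formats translate into lower memory demand, here is a minimal sketch of plain symmetric int8 weight quantization. It is a generic, assumed illustration; the item above does not describe TurboQuant’s actual technique, which may differ substantially.

import numpy as np

# Generic symmetric int8 weight quantization: a textbook illustration
# of why quantization shrinks model memory, not a description of
# Google's TurboQuant.

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 plus one float scale per tensor."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # one weight matrix
q, scale = quantize_int8(w)

print(f"float32: {w.nbytes / 2**20:.0f} MiB")  # 64 MiB
print(f"int8:    {q.nbytes / 2**20:.0f} MiB")  # 16 MiB, a 4x reduction
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")

Cutting each weight from four bytes to one shrinks the footprint fourfold and, because large-model inference is typically memory-bandwidth bound, also tends to speed it up, which is the link to DRAM demand.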

Nature paper details The AI Scientist project

Sakana AI and academic collaborators have published a Nature paper describing The AI Scientist, a system designed to automate the full machine learning research lifecycle. The work reports peer-review results, benchmarking against human reviewers, and the limitations that still constrain the system.

EU Artificial Intelligence Act prohibited practices overview

A LexisNexis practice note examines Article 5 of the EU Artificial Intelligence Act and the practices banned for posing unacceptable risks to EU values and fundamental rights. It also addresses enforcement, liability, and contractual considerations.
