Judge temporarily blocks Pentagon action against Anthropic

A federal judge temporarily barred the Pentagon from labeling Anthropic a supply chain risk and blocked enforcement of a presidential directive telling agencies to stop using the company’s chatbot Claude. The ruling found the government’s measures appeared punitive and likely unlawful.

A federal judge in San Francisco temporarily blocked the Pentagon from labeling artificial intelligence company Anthropic a supply chain risk and halted enforcement of President Donald Trump's social media directive ordering federal agencies to stop using Anthropic and its chatbot Claude. U.S. District Judge Rita Lin said the administration's actions appeared arbitrary and capricious and could seriously damage the company.

Lin's ruling followed a 90-minute hearing in San Francisco federal court on Tuesday, where she pressed the government on why it took the extraordinary step of punishing Anthropic after negotiations over a defense contract broke down. The dispute centered on Anthropic's effort to prevent its artificial intelligence technology from being used in fully autonomous weapons or in surveillance of Americans. Anthropic argued that the government had imposed an unjustified stigma as part of an unlawful campaign of retaliation, while the Pentagon maintained it should be able to use Claude in any way it deems lawful.

Lin said the case was not about resolving the broader policy fight over military or domestic uses of artificial intelligence, but about whether the government's response was lawful. She wrote that the broad punitive measures, including Defense Secretary Pete Hegseth's use of a rare military authority previously directed at foreign adversaries, appeared designed to punish Anthropic rather than protect legitimate government interests. She also wrote that nothing in the governing statute supports treating an American company as a potential adversary for disagreeing with the government.

The order's effect is delayed for a week, and it doesn't require the Pentagon to use Anthropic's products or prevent it from transitioning to other artificial intelligence providers. Anthropic said it was grateful for the swift ruling and pleased the court agreed it was likely to succeed on the merits. The company said it brought the case to protect its business and customers while remaining focused on working productively with the government to ensure Americans benefit from safe, reliable artificial intelligence.

Anthropic has also filed a separate, narrower case that is still pending before the federal appeals court in Washington, D.C. That case concerns a different rule the Pentagon is using to try to declare Anthropic a supply chain risk. The Pentagon did not immediately respond to a request for comment. Supporting briefs were filed by Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.

Self-adaptive framework extracts earthquake data from web pages

A self-adaptive large language model framework is designed to extract and structure earthquake information from heterogeneous web sources by generating, validating, and reusing extraction schemas. In controlled tests, GPT-OSS delivered the strongest extraction quality, while selector errors were concentrated in wrong element selection and missing content.

Study finds widespread weaknesses in autonomous agents

A multi-institution study found that autonomous agents across several sectors are highly exposed to tool-chaining, goal drift, and memory poisoning attacks. The findings suggest agentic systems face broader and deeper security risks than stateless large language models.

Federal safety net unprepared for Artificial Intelligence job losses

Economists are warning that the federal system designed to support displaced workers is not equipped for a wave of job losses tied to artificial intelligence. Existing unemployment benefits and retraining programs are widely seen as too limited to manage broad disruption.
