A federal judge in San Francisco temporarily blocked the Pentagon from labeling artificial intelligence company Anthropic a supply chain risk and halted enforcement of President Donald Trump’s social media directive ordering federal agencies to stop using Anthropic and its chatbot Claude. U.S. District Judge Rita Lin said the administration’s actions appeared arbitrary and capricious and could seriously damage the company.
Lin’s ruling followed a 90-minute hearing in San Francisco federal court on Tuesday, where she pressed the government on why it took the extraordinary step of punishing Anthropic after negotiations over a defense contract broke down. The dispute centered on Anthropic’s effort to prevent its AI technology from being used in fully autonomous weapons or in surveillance of Americans. Anthropic argued that the government had imposed an unjustified stigma as part of an unlawful campaign of retaliation, while the Pentagon maintained it should be able to use Claude in any way it deems lawful.
Lin said the case was not about resolving the broader policy fight over military or domestic uses of AI, but about whether the government’s response was lawful. She wrote that the broad punitive measures, including Defense Secretary Pete Hegseth’s use of a rare military authority previously directed at foreign adversaries, appeared designed to punish Anthropic rather than to protect legitimate government interests. She also wrote that nothing in the governing statute supports treating an American company as a potential adversary for disagreeing with the government.
The order’s effect is delayed for a week, and it doesn’t require the Pentagon to use Anthropic’s products or prevent it from transitioning to other AI providers. Anthropic said it was grateful for the swift ruling and pleased the court agreed it was likely to succeed on the merits. The company said it brought the case to protect its business and customers while remaining focused on working productively with the government to ensure Americans benefit from safe, reliable AI.
Anthropic has also filed a separate, narrower case that is still pending in the federal appeals court in Washington, D.C. That case concerns a different rule the Pentagon is using to try to declare Anthropic a supply chain risk. The Pentagon did not immediately respond to a request for comment. Supporting briefs were filed by Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.
