OpenAI has agreed to allow the US military to use its artificial intelligence technology in classified settings, after what CEO Sam Altman described as “definitely rushed” negotiations that began only after the Pentagon publicly reprimanded Anthropic. OpenAI has publicly emphasized that the deal does not permit its systems to be used for autonomous weapons or mass domestic surveillance, and insisted that it did not simply accept the terms Anthropic had rejected. The outcome sets up a contrast in strategy: Anthropic tried to write explicit moral constraints into its Pentagon contract and lost, while OpenAI took a more pragmatic, law-focused approach that is ultimately more accommodating to the Department of Defense.
The core of OpenAI’s approach is an assumption that the government will not break the law, backed by contractual references to existing statutes and policies governing autonomous weapons and surveillance. The partial contract excerpt the company released cites measures as specific as a 2023 Pentagon directive on autonomous weapons, which sets design and testing requirements rather than imposing an outright ban, and as broad as the Fourth Amendment, which underpins protections against mass surveillance. Legal scholar Jessica Tillipman noted that the excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use”; instead, it states that the Pentagon cannot use OpenAI’s technology in ways that violate current law. Critics who rallied behind Anthropic argue that these laws and directives are insufficient to prevent AI-enabled autonomous weapons or sweeping surveillance, pointing to historical episodes such as the surveillance programs exposed by Edward Snowden, which were internally deemed lawful before courts later ruled them unlawful.
OpenAI presents a second safeguard: it says it will retain control over model safety policies and will not provide the military with versions stripped of safety controls, asserting that it can embed “red lines” against mass surveillance, and against weapons that operate without human involvement, directly into model behavior. However, the company has not detailed how these rules differ from those governing ordinary users, and the protections must be implemented in a classified environment on an aggressive six-month timeline.

Beneath the contract battle lies a broader dispute over whether technology firms should refuse to support military uses that are legally permitted but morally objectionable. The Pentagon reacted furiously to Anthropic’s attempt to draw such lines: Defense Secretary Pete Hegseth denounced the company and ordered that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” a threat Anthropic plans to challenge in court and one OpenAI has criticized. The Pentagon now faces the practical challenge of phasing out Anthropic’s Claude, reportedly already used in classified operations including some in Venezuela and in strikes on Iran, within six months while bringing in OpenAI and Elon Musk’s xAI. The episode illustrates how a fast-moving AI acceleration strategy is pressuring companies to relax earlier red lines against contentious military applications.
