OpenAI’s agreement to let the Pentagon use its technology in classified environments has raised questions about how far its models could reach in military operations tied to Iran. Sam Altman has said the military cannot use the company’s technology to build autonomous weapons, but the agreement relies on the military following its own guidelines. OpenAI has also said the deal will prevent domestic surveillance, though it is unclear how that restriction would be enforced. The shift marks a rapid embrace of military work at a moment when the US is escalating strikes against Iran and relying more heavily on artificial intelligence.
One likely use is in targeting and strikes. OpenAI’s technology must still be integrated with other military systems before it can operate in classified settings, and the timeline remains unclear. If the conflict with Iran is still underway by the time OpenAI’s tech is in the system, it could help a human analyst review a list of potential targets, analyze the available information, and prioritize which to strike first. The model could account for logistics information, such as where particular planes or supplies are located, and could analyze many different inputs in the form of text, image, and video. A human would then be responsible for manually checking these outputs, though that raises the question of how much speed the system really adds if a person is expected to verify the results.
OpenAI’s models may also appear in drone defense through its partnership with Anduril, which makes both drones and counter-drone technologies for the military. Announced at the end of 2024, the agreement said OpenAI would work with Anduril on time-sensitive analysis of drones attacking US forces and help take them down. Anduril already uses its own models to analyze camera footage and sensor data, while OpenAI may be better suited to conversational systems that let soldiers query those tools directly or receive guidance in natural language. The stakes are high: six US service members were killed in Kuwait on March 1 after an Iranian drone attack was not intercepted by US air defenses.
OpenAI is also expanding into military back-office work. In December, Defense Secretary Pete Hegseth began encouraging the millions of people in more administrative roles in the military to use GenAI.mil, a platform for secure access to commercial artificial intelligence models. OpenAI followed in February, announcing that its models would be used to draft policy documents and contracts and to assist with administrative support of missions. That work is unlikely to directly shape sensitive decisions in Iran, but it reinforces the Pentagon’s broader effort to push artificial intelligence into every layer of military activity, from battlefield analysis to paperwork.
