The Pentagon is discussing plans to create secure environments where generative artificial intelligence (AI) companies could train military-specific versions of their models on classified data. AI models like Anthropic’s Claude are already used to answer questions in classified settings, including applications such as analyzing targets in Iran, but training models directly on classified information would mark a significant shift in how these systems are used in defense work.
Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background. The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become an “AI-first” warfighting force as the conflict with Iran escalates. Training would be done in a secure data center accredited to host classified government projects, where a copy of an AI model is paired with classified data. Though the Department of Defense would remain the owner of the data, personnel from AI companies might in rare cases access it if they hold the appropriate security clearance.
Before allowing this new training, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, such as commercially available satellite imagery. The military has long used computer vision systems to identify objects in drone and aircraft imagery, and companies have already received contracts to train models on that kind of material. Large language model developers have also built government-focused versions of their systems, including Anthropic’s Claude Gov, designed for secure environments and broader language coverage.
Security concerns center on whether classified information learned during training could later be surfaced to users who should not have access to it. Aalok Mehta of the Center for Strategic and International Studies warned that a shared model used across military departments with different classification levels could expose sensitive intelligence, such as the identity of an operative, to the wrong audience inside the Defense Department. He said broader internet leakage is easier to limit if the systems are built correctly, and noted that Palantir has already won contracts to support secure environments where users can ask models about classified topics without that information flowing back to AI companies.
The Pentagon’s push follows a January memo from Defense Secretary Pete Hegseth and reflects a broader effort to bring more generative AI into combat and administrative work. Current uses include ranking lists of targets, recommending strike priorities, and drafting contracts and reports. Potential future uses for models trained on classified material could include spotting subtle clues in imagery, linking new intelligence with historical context, and processing vast stores of text, audio, images, and video collected in many languages.
