Pentagon weighs training Artificial Intelligence models on classified data

The Pentagon is exploring secure setups that would let generative Artificial Intelligence companies train military-specific models on classified information. The approach could improve performance on defense tasks while introducing new risks around leakage and access control.

The Pentagon is discussing plans to create secure environments where generative Artificial Intelligence companies could train military-specific versions of their models on classified data. Artificial Intelligence models like Anthropic’s Claude are already used to answer questions in classified settings, including analyzing targets in Iran, but training models directly on classified information would mark a significant shift in how these systems are used in defense work.

Training versions of Artificial Intelligence models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background. The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become an “AI-first” warfighting force as the conflict with Iran escalates. Training would be done in a secure data center accredited to host classified government projects, where a copy of an Artificial Intelligence model is paired with classified data. Though the Department of Defense would remain the owner of the data, personnel from Artificial Intelligence companies might in rare cases access it if they hold the appropriate security clearance.

Before allowing this new training, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery. The military has long used computer vision systems to identify objects in drone and aircraft imagery, and companies have already received contracts to train models on that kind of material. Large language model developers have also built government-focused versions of their systems, including Anthropic’s Claude Gov, designed for secure environments and broader language coverage.

Security concerns center on whether classified information learned during training could later be surfaced to users who should not have access to it. Aalok Mehta of the Center for Strategic and International Studies warned that a shared model used across military departments with different classification levels could expose sensitive intelligence, such as the identity of an operative, to the wrong audience inside the Defense Department. He said broader internet leakage is easier to limit if the systems are built correctly, and noted that Palantir has already won contracts to support secure environments for asking models about classified topics without returning that information to Artificial Intelligence companies.

The Pentagon’s push follows a January memo from Defense Secretary Pete Hegseth and reflects a broader effort to bring more generative Artificial Intelligence into combat and administrative work. Current uses include ranking lists of targets, recommending strike priorities, and drafting contracts and reports. Potential future uses for models trained on classified material could include spotting subtle clues in imagery, linking new intelligence with historical context, and processing vast stores of text, audio, images, and video collected in many languages.

Impact Score: 68

OpenAI’s Pentagon access and xAI’s Grok lawsuit lead the day

OpenAI’s decision to give the Pentagon access to its Artificial Intelligence is raising questions about how quickly generative systems could move into military operations. Meanwhile, xAI is facing a lawsuit alleging Grok enabled the creation of child sexual abuse material.

NVIDIA details DLSS 5 image quality goals

NVIDIA says DLSS 5 is designed to deliver real-time neural rendering while preserving the visual direction developers intended for each frame. The technology combines lighting, material, and temporal improvements to keep enhanced images consistent with game content.

European Union moves to streamline and tighten Artificial Intelligence rules

The European Union is advancing parallel efforts to simplify parts of its Artificial Intelligence rulebook while moving toward tougher restrictions on tools used to create non-consensual sexual content. The latest steps combine broader regulatory streamlining with targeted action against harmful image and audio generation systems.

Y Combinator machine learning startups in 2026

Y Combinator’s 2026 machine learning directory highlights a broad mix of startups spanning infrastructure, robotics, healthcare, developer tools, data systems, and enterprise software. The list shows how deeply Artificial Intelligence and machine learning are being applied across industrial, scientific, and business workflows.
