Pentagon weighs training Artificial Intelligence models on classified data

The Pentagon is exploring secure setups that would let generative Artificial Intelligence companies train military-specific models on classified information. The approach could improve performance on defense tasks while introducing new risks around leakage and access control.

The Pentagon is discussing plans to create secure environments where generative Artificial Intelligence companies could train military-specific versions of their models on classified data. Artificial Intelligence models like Anthropic’s Claude are already used to answer questions in classified settings, including applications such as analyzing targets in Iran, but training models directly on classified information would mark a significant shift in how these systems are used inside defense work.

Training versions of Artificial Intelligence models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background. The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become an “AI-first” warfighting force as the conflict with Iran escalates. Training would take place in a secure data center accredited to host classified government projects, where a copy of an Artificial Intelligence model is paired with classified data. Though the Department of Defense would remain the owner of the data, personnel from Artificial Intelligence companies might in rare cases access it if they hold the appropriate security clearance.

Before allowing this new training, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery. The military has long used computer vision systems to identify objects in drone and aircraft imagery, and companies have already received contracts to train models on that kind of material. Large language model developers have also built government-focused versions of their systems, including Anthropic’s Claude Gov, designed for secure environments and broader language coverage.

Security concerns center on whether classified information learned during training could later be surfaced to users who should not have access to it. Aalok Mehta of the Center for Strategic and International Studies warned that a shared model used across military departments with different classification levels could expose sensitive intelligence, such as the identity of an operative, to the wrong audience inside the Defense Department. He said broader internet leakage is easier to limit if the systems are built correctly, and noted that Palantir has already won contracts to support secure environments for asking models about classified topics without returning that information to Artificial Intelligence companies.

The Pentagon’s push follows a January memo from Defense Secretary Pete Hegseth and reflects a broader effort to bring more generative Artificial Intelligence into combat and administrative work. Current uses include ranking lists of targets, recommending strike priorities, and drafting contracts and reports. Potential future uses for models trained on classified material could include spotting subtle clues in imagery, linking new intelligence with historical context, and processing vast stores of text, audio, images, and video collected in many languages.

Impact Score: 68

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
