US Military Enters New Phase of Generative Artificial Intelligence Deployment

The US military is accelerating its use of generative Artificial Intelligence for surveillance and decision-making, raising critical questions about oversight and security.

The United States military has entered a significant new stage in its integration of generative Artificial Intelligence, expanding beyond earlier uses in computer vision to employ sophisticated language models in operational settings. Recent exercises across the Pacific saw Marines using chatbot-style interfaces, comparable to ChatGPT, to analyze intelligence and identify threats more efficiently. This shift marks "phase two" of the Pentagon's Artificial Intelligence adoption, driven by increasing calls for technological efficiency from both government officials and industry leaders such as Elon Musk and Secretary of Defense Pete Hegseth.

As this deployment broadens, experts voice concerns over the readiness of large language models to interpret complex and nuanced intelligence, especially under high-pressure or geopolitically sensitive circumstances. The new tools extend beyond analysis to recommending specific courses of action, such as generating potential target lists. Supporters argue these systems promise greater accuracy and the potential to reduce civilian casualties, yet human rights advocates worry about reliability and the risk of Artificial Intelligence assuming critical decision-making roles without adequate oversight.

Three primary challenges dominate current debates. First, the practical limits of keeping a "human in the loop" are increasingly apparent: with Artificial Intelligence now processing vast, interconnected data streams, it is often unrealistic for humans to effectively audit its outputs. Second, generative Artificial Intelligence complicates military data classification; models skilled at compiling disparate open-source details can inadvertently reveal secrets, upending old paradigms of information compartmentalization. Third, the military's rapid adoption mirrors consumer trends, raising questions about how far up the chain of command Artificial Intelligence should be empowered to go. Recent policy guidance aims to safeguard these advances, but looming debates under different administrations and the private sector's growing influence all point to a future in which Artificial Intelligence becomes central to the US military's highest-level, most time-sensitive decisions.

Impact Score: 88

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a rowhammer attack against GDDR6-based NVIDIA GPUs that can bypass IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.
