Who decides how America uses Artificial Intelligence in war?

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.

Artificial Intelligence is moving deeper into national security, defense, and warfare even though the technology remains difficult to interpret, susceptible to hallucinations and bias, and only partially governed by clear rules. Artificial Intelligence-powered systems have been deployed in Ukraine’s defense efforts and assisted the Pentagon in capturing former Venezuelan President Nicolás Maduro. A dispute over Pentagon contracting, including restrictions on domestic surveillance and fully autonomous weapons, has sharpened a broader debate over who should set the boundaries for military use of the technology, how those boundaries should be enforced, and whether government power or corporate discretion poses the greater risk.

One view holds that military use of Artificial Intelligence is necessary if the U.S. is to keep pace with adversaries, but that procurement and strategy should be set by elected government rather than by private firms. On this view, commercial model pluralism is healthy, companies should not be punished for holding different values, and key terms such as “mass surveillance,” “fully autonomous,” and “human-in-the-loop” remain too vague to support sound policy. Existing regulations were built for an era when human analysts could review only a tiny share of collected intelligence; the ability of Artificial Intelligence to process far more data raises fresh questions about fairness, accuracy, accountability, and the risk of false targeting.

Another view argues that unelected companies should not dictate defense policy and that ethical guardrails must be weighed against the state’s obligation to protect its citizens. That perspective contends that Pentagon activities are directed abroad and already constrained by law, oversight, and safeguards, and it warns that inflexible rules requiring a human in the loop could leave the U.S. at a disadvantage in defensive combat scenarios. It also argues that allowing a contractor to veto military uses of its own products would set an unworkable precedent for national defense.

A broader civic critique frames the conflict as a struggle over democratic control of a powerful technology. It warns against both corporate gatekeeping and unilateral executive pressure, including threats to seize or blacklist technologies that conflict with current policy preferences. That argument calls for a whole-of-society approach involving civil society, technologists, academia, philanthropy, and all branches of government. It also points to Stanford HAI’s Ethics & Society Review as a model for anticipating societal harms early, while questioning how much such processes can matter if presidential power or industry influence ultimately overrides deliberation.

Privacy concerns form a separate but related fault line. The ability of large language models to absorb and reason over enormous volumes of personal, public, and semi-public data could dramatically lower the barrier to profiling and surveillance. The federal government already buys commercially gathered data, and recent ICE actions against protesters in Minnesota and other states, along with the agency’s targeted removals of immigrants, have shown how the Trump administration is putting that data to use. Senator Ron Wyden of Oregon recently introduced a bipartisan bill to limit the government’s purchase of brokered data for domestic intelligence. In that context, company guardrails are one of the few practical restraints on surveillance uses, even as firms such as Palantir continue providing government-focused data intelligence services.

Biosecurity adds another layer of concern. Artificial Intelligence systems that help design drugs can also help design toxins, and large language models have widened access to knowledge that bad actors could misuse. Continued publication of research is defended as the best way to test and improve capabilities openly, but that openness depends on safeguards at synthesis companies and among suppliers of specialized reagents. Screening systems, registries, and customer vetting are described as the key choke points. Researchers have also proposed a four-tier scheme for handling sensitive data, and there is strong support for minimum performance standards, civilian control, and policy commitments modeled on earlier nuclear governance debates.

Impact Score: 55

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a Rowhammer attack against GDDR6-based NVIDIA GPUs that bypasses IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.
