Helen Warrell and James O’Donnell open a debate about how artificial intelligence (AI) is reshaping conflict, starting with a near-future scenario in which autonomous drones, AI targeting, coordinated cyberattacks and large-scale disinformation operations combine to overwhelm defenses. Military leaders see digitally enhanced forces as faster and more accurate, while critics warn that growing reliance on AI risks rapid escalation and loss of human control. Prominent voices cited include Henry Kissinger, who warned of the dangers of AI-driven warfare, and António Guterres, who has called for a ban on fully autonomous lethal weapons systems. Researchers at Harvard’s Belfer Center and analysts such as Anthony King of the University of Exeter argue that fully automating war is unlikely and that AI will more likely augment human insight than replace it; as King puts it, “the complete automation of war itself is simply an illusion.”
The conversation lays out current military use cases for AI, none of which involves full autonomy. The article names planning and logistics, cyber operations for sabotage and espionage, information operations, and weapons targeting as the primary applications. Combat examples include Kyiv’s use of AI to direct drones that evade jammers and the Israel Defense Forces’ Lavender decision-support system, which reportedly helped identify around 37,000 potential human targets in Gaza. Contributors note the risk that systems like Lavender can replicate biases in their training data, while acknowledging that human personnel carry biases of their own. Keith Dear, a former UK military officer now at Cassi AI, argues that existing laws can govern such deployments provided humans remain responsible for decisions.
James O’Donnell traces a shift in tech-company behavior, noting that OpenAI moved from forbidding military use of its tools at the start of 2024 to partnering with Anduril on counter-drone work by year’s end. He highlights hype and money as drivers, pointing to deep funding from the Pentagon and European defense buyers and a sharp rise in venture capital for defense tech. Critics fall into two camps: those who question whether more precise targeting actually reduces overall casualties, citing early drone campaigns in Afghanistan, and those who point to technical limitations. Missy Cummings is cited warning that large language models can make serious errors and that a single human check may not suffice when a model draws on thousands of inputs. O’Donnell calls for greater skepticism toward bold promises about battlefield AI.
Warrell concludes that scrutiny of safety, oversight and political accountability must continue, while also urging skepticism about exaggerated claims of military capability. She warns that the speed and secrecy of an AI-weapons arms race risk depriving these systems of the public debate and scrutiny they require.
