[New publication] Rhetoric and regulation: The (limits of) human/AI comparison in debates on military artificial intelligence
Published 22 September 2025
The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision making in the conduct of war? As AI becomes increasingly embedded in military targeting processes, legal and ethical debates often ask who performs better: humans or machines? In a new publication, researchers Klaudia Klonowska and Taylor Kate Woodcock argue for the urgent need to critically examine the assumptions behind the human/AI comparison and its usefulness for legal analysis of military targeting.
These days, legal and policy debates about military AI are replete with comparisons between AI and human performance. Proponents claim that AI's superior speed, accuracy, and certainty improve adherence to International Humanitarian Law (IHL). This supposed AI superiority feeds hopes of overcoming human flaws in warfare through progress and rationalisation. The narrative also helps states justify major investments in military AI.
In their new chapter, titled “Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI”, researchers Klaudia Klonowska and Taylor Kate Woodcock (Asser Institute) unpack and critique the prevalence of comparisons between humans and AI systems, including in analyses of the fulfilment of legal obligations under IHL.
Binary framing
The authors challenge this often-binary framing by highlighting misleading assumptions that neglect how the use of AI results in complex human-machine interactions that transform targeting practices. They unpack what is meant by ‘better performance’, demonstrating how prevailing metrics for speed and accuracy can create misleading expectations around the use of AI given the realities of warfare. They conclude that holistic yet granular attention must be paid to the landscape of human-machine interactions to understand how the use of AI affects compliance with targeting obligations grounded in IHL.
Read the full chapter
About Klaudia Klonowska
Klaudia Klonowska is a postdoctoral researcher at Sciences Po Paris and managing director of the West Point Manual on Artificial Intelligence. Her PhD thesis, titled ‘Techno-Legal Tinkering: AI Decision-Support Systems, Human-Machine Interaction, and International Humanitarian Law’, was completed at the Asser Institute and the University of Amsterdam as part of the Designing International Law and Ethics into Military Artificial Intelligence (DILEMA) project, funded by the Dutch Research Council (NWO).
About Taylor Kate Woodcock
Taylor Kate Woodcock is a researcher at the Asser Institute in the research strand on Disruptive Technologies in Peace and Security. Her research examines the implications of the development and use of military applications of artificial intelligence (AI) for international law, with a specific emphasis on IHL. Her PhD thesis, entitled ‘Human-Machine (Learning) Interactions: War and Law in the AI Era’, was completed as part of the Designing International Law and Ethics into Military Artificial Intelligence (DILEMA) project, funded by the Dutch Research Council (NWO).
Read more
[New publication] Digital yes-men: How to deal with sycophantic military AI?
In a new publication, researcher Jonathan Kwik (Asser Institute) examines sycophantic military AI assistants. He explores the reasons behind ‘bootlicking’ behaviour in AI systems, highlights the significant battlefield dangers it presents, and proposes a two-part strategy comprising improved design and enhanced training to mitigate these risks for military forces. Read more.
[Op-ed] Scholars warn of a silent revolution in warfare driven by AI-powered decision systems
Researchers Marta Bo (Asser Institute) and Jessica Dorsey (Utrecht University) have published a critical op-ed in the Dutch newspaper NRC Handelsblad, shedding light on a silent revolution in modern warfare. Their piece, titled "Er is een stille revolutie gaande in de manier waarop strijdkrachten beslissingen nemen in oorlogstijd" ("A silent revolution is underway in the way armed forces make decisions in wartime"), highlights the rapidly increasing use of AI-based Decision Support Systems (AI-DSS) in military operations. Read more.
[Interview] Google skips due diligence for cloud services to Israel
A new story published in The Intercept reveals that tech company Google had serious concerns about providing state-of-the-art cloud and machine-learning services to Israel. The piece quotes Asser Institute researcher León Castellanos-Jankiewicz weighing in on Google's contractual inability to conduct proper risk assessments. Read more.

