[New publication] ‘Iterative assessment’: A framework for safer military AI systems?

Published 10 July 2025


In a new publication, researcher Jonathan Kwik (Asser Institute) proposes an innovative approach to reduce civilian harm from successive deployments of military artificial intelligence (AI) systems. The chapter offers practical guidance for militaries committed to responsible AI use, providing a roadmap based on international humanitarian law (IHL) principles for protecting civilian populations while maintaining operational effectiveness.

As AI becomes increasingly integrated into military operations, from autonomous weapons to decision-support systems, armed forces face heightened uncertainty, unpredictable risks, and system vulnerabilities.

AI systems are often used in complex situations on the battlefield, where many risks and vulnerabilities only become apparent after deployment. Even AI operators acting in good faith may face situations in which unforeseeable civilian harm occurs, despite rigorous review and careful deployment.  

Unintended civilian casualties are often dismissed as unavoidable ‘accidents of war’. Kwik's new research, titled “Iterative assessment for military artificial intelligence (AI) systems”, challenges this assumption. While acknowledging that some AI failures may be initially unforeseeable, he argues that their recurrence can be significantly reduced through systematic learning and adaptation.

Structured approach 
In his chapter, Kwik introduces an Iterative Assessment framework comprising two key mechanisms: Iterative Review and Iterative Assessment in Deployment. This structured approach enables military decision-makers to systematically capture insights from real-world AI performance, identify previously unknown risks, and update their operational procedures accordingly. 

The framework is grounded in the principles of international humanitarian law, recognising that while initial failures may be unavoidable, military forces still have a duty to prevent repeat harm once risks become known and foreseeable.

Rather than accepting repeated incidents as inevitable, this iterative approach could help transform post-deployment evaluations into best practices for managing AI-induced uncertainty and minimising civilian harm from the use of military AI.  

Read the full chapter 

About Jonathan Kwik 
Dr Jonathan Kwik is a researcher in international law at the Asser Institute. He specialises in techno-legal research on the military use of artificial intelligence (AI) related to weapons, the conduct of hostilities, and operational decision-making. He obtained his doctorate (cum laude) from the University of Amsterdam on the lawful use of AI-embedded weapons at the operational level. He recently published the book Lawfully Using Autonomous Weapon Technologies. Jonathan is part of the research strand ‘Regulation in the public interest: Disruptive technologies in peace and security’, which addresses regulation to safeguard and promote public interests. It focuses on the development of the international regulatory framework for the military applications of disruptive technologies and the arms race in conventional and non-conventional weapons. The public interest of peace and security serves as the prime conceptual framework in this strand.

Read more 
[Policy brief] Can Rules of Engagement and military directives effectively control military AI? 
As Artificial Intelligence (AI) continues to reshape modern warfare, the need for effective control over military AI systems has become increasingly urgent. Insights from a recent expert workshop, led by researcher Jonathan Kwik and colleagues, underscore the need for strategic, flexible, and context-specific AI guidelines. Read more.   

[Interview] Jonathan Kwik: "I am bridging the gap between the technical and the legal domains" 
Researcher Jonathan Kwik specialises in the laws regulating conduct of hostilities and artificial intelligence (AI). We interviewed him about his PhD dissertation entitled: ‘Lawfully using autonomous weapon technologies: A theoretical and operational perspective’. Jonathan Kwik: “What is often missed by jurists is the factual, concrete understanding of what technology can do and what its limitations are”. An interview. Read more.  

[New publication] Balancing military and humanitarian interests: Scaling the scope of autonomous weapon attacks 
In a new publication, researcher Jonathan Kwik proposes a scaling methodology to help characterise attacks by autonomous weapon systems (AWS). This could provide greater clarity on the legality of such attacks under international law, benefiting both civilians and belligerents. Read more.