[Expert drafting] Rules of engagement for military artificial intelligence

On 6 and 7 November 2024, the Asser Institute is hosting an expert workshop on rules of engagement (ROE) for military artificial intelligence (AI) systems. As a tool used by commanders to give military forces clear instructions, could rules of engagement help ensure the lawful application of military AI?

Photo: Researcher Jonathan Kwik leading the workshop


Rules of engagement (ROE) are directives used by military authorities to ensure that force and capabilities are properly employed on the battlefield. These directives translate complex laws and policies into simpler instructions, specifying when and how force may be applied. Although ROE are a common feature of contemporary command and control (C2), few have considered whether they could control the use of artificial intelligence (AI) on the battlefield.

Given their straightforward yet flexible nature, ROE have the potential to ensure the lawful use of novel technologies and capabilities, including AI systems. The complexity of modern AI makes understanding its functions and risks challenging for unit commanders; a simple set of instructions like ROE could therefore help AI users maintain control. Using ROE, military authorities could also limit the unpredictability of AI users' behaviour and avoid unwanted situations or excessive civilian casualties.

About the workshop

On 6 and 7 November, the Asser Institute is organising a two-day interdisciplinary workshop to explore ROE as a tool for controlling the use of military AI. What is their viability, and what considerations could guide the drafting of ROE for military AI systems?  

This workshop will bring together fifteen experts from different countries and domains, including military personnel, technical specialists, and researchers. Together they will draft ROE for AI based on a hypothetical case. Given that ROE are usually written by military commanders, it will be interesting to see them drafted through an interdisciplinary approach. The findings from this workshop – and potentially the AI-ROE framework drafted by the expert group – will be published in a later report.

This workshop is an initiative of the Asser Institute's DILEMA Project and ELSA Lab Defence Project, which both focus on the responsible use of AI in defence. It will be led by researchers Berenice Boutin and Jonathan Kwik, who are part of the Asser Institute research strand 'Regulation in the public interest: Disruptive technologies in peace and security'. This research strand focuses, in particular, on the development of the international regulatory framework for military applications of disruptive technologies and the arms race in conventional and non-conventional weapons.

About Berenice Boutin

Berenice Boutin is a Senior Researcher in International Law at the Asser Institute, Coordinator of the Research Strand on Disruptive Technologies in Peace and Security, and project leader of the NWO-funded project Designing International Law and Ethics into Military Artificial Intelligence (DILEMA). Her research explores the mutual impacts between new technologies, such as artificial intelligence, and international law. This includes the role of international law in the governance and regulation of technologies, and the impact of new technologies on core notions and concepts of international law. 

About Jonathan Kwik

Jonathan Kwik is a researcher in international law at the Asser Institute, attached to the ELSA Lab Defence project. He specialises in the laws governing the conduct of hostilities and artificial intelligence (AI). He currently sits as a member of the Board of Experts of the Asia-Pacific Journal of International Humanitarian Law (APJIHL) and is an academic partner of the International Committee of the Red Cross (ICRC).