[New blog] Three lessons on the regulation of autonomous weapons systems to ensure accountability for violations of IHL

Published 2 March 2023

@Shutterstock - MQ-9 Reaper

In a new blog post for ICRC, researchers Marta Bo (Asser Institute, SIPRI) and Vincent Boulanin (SIPRI) offer three lessons on the regulation of autonomous weapons systems to ensure accountability for violations of international humanitarian law (IHL).

Ahead of a high-level UN meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE) on 6-10 March in Geneva, the authors urge states to explore how accountability for IHL violations involving autonomous weapons systems (AWS) would be ensured. They call this a ‘useful and much-needed exercise for the policy process’, as it provides a lens to explore what is, or should be, demanded, permitted, and prohibited in the development and use of AWS.

Legal clarification
Bo and Boulanin call on the GGE to provide legal clarification and to elaborate the standards of intent, knowledge and behaviour demanded of the user(s) of autonomous weapons. For the legal framework governing accountability to be effectively triggered, the world needs to know what international humanitarian law permits, requires, and prohibits, they write.

Since 2013, the risks posed by autonomous weapon systems have been the focus of intergovernmental discussions at the UN Convention on Certain Conventional Weapons (UN CCW). States still disagree on whether and how the development and use of autonomous weapon systems (AWS) should be (further) regulated. But states have recognised, among other principles, that human responsibility for decisions on the use of weapon systems must be retained, since accountability cannot be transferred to machines.  

Gap in the policy debate
As the authors write: “The question of what this principle entails is critical for the continuation of the policy process on AWS. To date, the expert debate has mainly elaborated on how human responsibility should be exercised – preventively – to ensure compliance with IHL. Less attention has been cast on how accountability would be ensured, in practice, in case of IHL violations involving AWS.” This is a major gap in the policy debate, according to the authors, ‘first and foremost because preventing and suppressing IHL violations is part of States’ obligations under the Geneva Conventions and customary law’.

In their blog, Bo and Boulanin offer three lessons for the intergovernmental debate on the regulation of autonomous weapons systems. They are based on their recent report Retaining Human Responsibility in the Development and Use of Autonomous Weapons Systems: on Accountability for Violation of International Humanitarian Law involving AWS. 

Read the full blog.




[Spring academy] Artificial intelligence and international law
The Asser Institute’s Spring academy ‘Artificial intelligence and international law’ is an annual interdisciplinary programme offering in-depth perspectives on AI and international law. It addresses fundamental issues at the intersection of theory and practice. The programme covers the technical aspects of AI, the philosophy and ethics of AI, human rights in relation to AI, AI and international humanitarian law, AI and international responsibility, and international governance. The spring academy provides an ideal venue to help you understand these aspects of AI through a short interactive course with plenty of room for discussion with your fellow multidisciplinary participants. Read more.
The Asser Institute’s Spring academy artificial Intelligence and international law, is an annual interdisciplinary programme offering in-depth perspectives on AI and international law. It addresses fundamental issues at the intersection of theory and practice. The programme will cover the technical aspects of AI, the philosophy and ethics of AI, human rights in relation to AI, AI in international humanitarian law, AI and international responsibility and international governance. The spring academy provides an ideal venue to help you understand these aspects of AI through a short interactive course with plenty of room for discussion with your fellow multidisciplinary participants. Read more.