[New publication] Military artificial intelligence and the principle of distinction: A state responsibility perspective

Published 9 December 2022

Photo: public domain - Captain Daniel, US 867th Attack Squadron MQ-9 Reaper pilot.

In a new article for the Israel Law Review (Cambridge University Press), researcher Magda Pacholska argues that existing military AI technologies do not raise unique issues under the law of state responsibility. She argues that should fully autonomous weapons - that is, machine learning-based lethal systems capable of changing their own rules of operation - ever be fielded, they ought to be conceptualised as state agents and treated akin to state organs.

Military artificial intelligence (AI)-enabled technology may still be at a relatively early stage, but the debate on how to regulate its use is already in full swing. Much of the discussion revolves around autonomous weapons systems (AWS) and the ‘responsibility gap’ they would ostensibly produce.

Pacholska’s article argues that while some military AI technologies may indeed pose a range of conceptual hurdles in the realm of individual responsibility, they do not raise any unique issues under the law of state responsibility.

Her analysis considers the state responsibility regime and maps out crucial junctures in applying it to potential violations of the cornerstone of international humanitarian law (IHL) - the principle of distinction - resulting from the use of AI-enabled military technologies.

Systemic shortcomings
The article reveals that any challenges in ascribing responsibility in cases involving AWS would not be caused by the incorporation of AI, but would stem from pre-existing systemic shortcomings of IHL and the unclear reverberations of mistakes thereunder.

Pacholska reiterates that state responsibility for the effects of AWS deployment is always retained through the commander's ultimate responsibility to authorise weapon deployment in accordance with IHL.

It is proposed, however, that should so-called fully autonomous weapon systems - that is, machine learning-based lethal systems capable of changing their own rules of operation beyond a predetermined framework - ever be fielded, it might be fairer to attribute their conduct to the fielding state by conceptualising them as state agents and treating them akin to state organs.

Read the full article (open access).

Dr Magda Pacholska LL.M. is a Marie Sklodowska-Curie Individual Postdoctoral Fellow working on the project entitled “Implementing International Responsibility for AI in Military Practice” within the DILEMA project (Designing International Law and Ethics into Military Artificial Intelligence). Before joining the Asser Institute, Magda worked for two years as a legal adviser at the Polish General Command of the Armed Forces, where she focused on the legal aspects of interoperability in joint operations. 

[Spring academy] Artificial intelligence and international law | 27-31 March 2023

The spring academy Artificial intelligence and international law is an annual interdisciplinary programme offering in-depth perspectives on AI and international law. It addresses fundamental issues at the intersection of theory and practice. The programme covers the technical aspects of AI, the philosophy and ethics of AI, human rights in relation to AI, AI in international humanitarian law, AI and international responsibility, and international governance. The spring academy provides an ideal venue to help you understand these aspects of AI through a short interactive course with plenty of room for discussion with fellow participants from a range of disciplines. Read more.

Read more
State responsibility in relation to military applications of artificial intelligence
Asser Institute senior researcher Dr Bérénice Boutin explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies.

Retaining human responsibility in the development and use of autonomous weapon systems: On accountability for violations of international humanitarian law involving AWS
In a report for the Stockholm International Peace Research Institute (SIPRI), Asser Institute researcher Marta Bo (together with Laura Bruun and Vincent Boulanin) tackles how humans can be held responsible for violations of international humanitarian law involving autonomous weapons systems.

In a new podcast episode by On Air, Asser Institute researcher Taylor Woodcock discusses today’s ‘overshadowing focus on autonomous weapon systems (AWS) in warfare’ and the consequent lack of attention to other military applications of artificial intelligence, such as the use of data-driven algorithms to assist with target recognition, decision-making aids for military tasking, and support for intelligence, surveillance and reconnaissance. According to Woodcock, we need to fully understand the effects of these technologies on human decision-making processes before such applications are deployed.