[New publication] Autonomous weapons and the responsibility gap in the ICC Statute

Published 4 May 2021


A new publication in the Journal of International Criminal Justice (Oxford Academic) by Asser and IHEID researcher Marta Bo tackles the difficulties of establishing responsibility for war crimes committed after the deployment of artificial intelligence (AI) in the military. Military AI, alone or in interaction with human decision-makers, risks harming civilians through errors in the identification of targets. Bo’s article examines the specific issue of the mens rea (the intention or knowledge of wrongdoing that constitutes part of a crime) required to establish responsibility for the war crime of indiscriminate attacks, in the context of attacks carried out with semi-autonomous weapons or with AI support in targeting decision-making. Instead of ‘piercing the fog of war’, military AI could thicken it further.

With the deployment of lethal autonomous weapons, fatal errors might translate into attacks against civilians, which can amount to war crimes. Such attacks are 'unintentional' in the sense that civilians were not the intended target of the operation. In these scenarios, the human employing AI in the targeting process takes a risk that civilians could be hit. However, war crimes committed during the conduct of hostilities are crimes of intent, which means that under current international law such conduct might be difficult to prosecute, creating a responsibility gap. This responsibility gap in the interaction between humans and machines is a key focal point of the article.

Responsibility gap
Bo suggests and justifies the inclusion of certain forms of risk-taking in the intent requirement (mens rea) of conduct-of-hostilities war crimes, as a way to expand the possibilities of prosecuting war crimes committed through the use of military AI. For instance, an interpretation that admits risk-based mental elements such as dolus eventualis (foreseeing a possible outcome of one's conduct and accepting the risk of its occurrence) and recklessness would better capture the criminality of a person who knowingly accepts the risk of killing civilians as part of an AI-powered attack.

Nonetheless, the article indicates that this construction can be employed only in specific circumstances, since in most scenarios even these lowered mens rea requirements would not be met. In most human-machine teaming scenarios, lower forms of intent such as dolus eventualis could still be insufficient to ascribe criminal responsibility for indiscriminate attacks against civilians. This is because of the specific risks posed by integrating autonomy into the targeting process and the resulting changes to the cognitive environment in which human agents operate, which significantly affect specific components of mens rea standards.

Marta Bo, ‘Autonomous Weapons and the Responsibility Gap in Light of the Mens Rea of the War Crime of Attacking Civilians in the ICC Statute’, Journal of International Criminal Justice (2021).

Read the full article

Dr Marta Bo is a researcher at the Asser Institute and the Graduate Institute of International and Development Studies (Geneva). She is currently researching criminal responsibility for war crimes committed with autonomous weapon systems (the LAWS and War Crimes Project, led by Prof. Paola Gaeta at the Graduate Institute).

Further reading
M. Bo, ‘Meaningful human control over autonomous weapons systems: an (international) criminal law account’, Opinio Juris, available at http://opiniojuris.org/2020/12/18/meaningful-human-control-over-autonomous-weapon-systems-an-international-criminal-law-account/

Our DILEMA project explores interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. Find out more about upcoming DILEMA events here.

Interested in learning more about the increasing development and deployment of artificial intelligence in the defence and security sectors? Our Masterclass in law and ethics of artificial intelligence in defence and security provides an in-depth perspective on the complex legal and ethical challenges it raises. Register here.