[New publication] Emerging technologies of warfare and the need to review their legality

Published 14 April 2021


A new publication by Asser researcher Klaudia Klonowska addresses the question of whether the legal review obligation under Article 36 of Additional Protocol I to the Geneva Conventions governs the use of artificial intelligence (AI) decision-support systems. The article proposes four criteria for determining whether emerging technologies of warfare should be subjected to a legal review. Such a classification is crucial: without it, states may fail to fulfil their duty to observe international humanitarian law in their decision-making, and the risk of unlawful conduct in warfare will remain high.

Not a weapon
International humanitarian law has focused strongly on attempts to govern decision-making systems, such as lethal autonomous weapons systems programmed to select and engage targets. Significantly less attention has been paid to decision-support systems: AI tools used to process, filter, and analyse data and to solve certain military decision-making tasks. The article argues that decision-support systems can nevertheless play a critical role in the long chain of human-machine and machine-machine decision-making infrastructure, and thus contribute to the co-production of hostilities. Such systems should therefore be considered to fall within the meaning of ‘weapon, means or method of warfare’ under Article 36 of Additional Protocol I to the Geneva Conventions and be subjected to a review of their legality.

Four criteria
The publication outlines four criteria for determining whether an emerging technology of warfare should be subjected to a legal review: (i) it poses a challenge to the application of international humanitarian law; (ii) it is integral to military decision-making; (iii) it has a significant impact on military operations; and (iv) it contributes to critical offensive capabilities. A technology that meets all four criteria should not be deployed before the question of its legality has been carefully examined.

Read the full article

Asser and artificial intelligence
Ensuring that the rule of law and ethical values are respected when using AI in the military is at the core of the DILEMA project. In this interdisciplinary project, researchers work together to address the ethical, legal, and technical dimensions of military AI. Through policy recommendations for national and international institutions, they seek to ensure that technologies developed to assist in decision-making remain accountable and subject to a meaningful level of human control. In addition to contributing to policy and decision-making, the project organises a variety of events, such as lectures and masterclasses, for practitioners, students, and anyone interested in the field of military AI.

Klaudia Klonowska is a junior researcher in the Asser research strand Human Dignity and Human Security in International and European Law and a researcher in the DILEMA project. Her research lies at the nexus of military technologies and international humanitarian law. Additionally, Klaudia coordinates two Global Counter-Terrorism Forum initiatives, one on maritime security and one on terrorist travel and watchlisting practices.