New publication on Article 36 and AI Decision-Support Systems

Published 12 April 2021

Project member Klaudia Klonowska published an article entitled ‘Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare’. The piece is forthcoming in the Yearbook of International Humanitarian Law (vol. 23), and available at SSRN as part of the Asser Research Paper Series.

AI decision-support systems significantly affect how states make decisions in warfare and conduct hostilities, and whether they comply with international humanitarian law. Decision-support systems, even if they do not autonomously engage targets, can play a critical role in the long chain of human-machine and machine-machine decision-making infrastructure, thus contributing to the co-production of hostilities.

Due to the lack of a definition of the treaty terms ‘weapons, means or methods of warfare’, it is unclear whether non-weaponised AI decision-support systems should be subjected to the legal review prescribed by Article 36 of Additional Protocol I. It remains a challenge to determine exactly what, beyond weapons, should be subjected to review.

The article proposes four criteria for determining whether an item should be subjected to a legal review: (i) it poses a challenge to the application of international humanitarian law; (ii) it is integral to military decision-making; (iii) it has a significant impact on military operations; and (iv) it contributes to critical offensive capabilities. An item that meets all four criteria should not be deployed without the question of its legality being explored with care.

By applying the legal review to AI decision-support systems, states fulfil their duty to observe international humanitarian law in decision-making and mitigate the risk of unlawful conduct in warfare. The author further promotes the conceptualisation of Article 36 as a review of technologies of warfare.