[New publication] State responsibility in relation to military applications of artificial intelligence
Published 13 September 2022
In a new paper, Asser Institute senior researcher Bérénice Boutin explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies.
While the question of how to attribute and allocate responsibility for wrongful conduct is one of the central contemporary challenges of AI, the perspective of state responsibility under international law remains relatively underexplored.
Moreover, most scholarly and policy debates have focused on questions raised by autonomous weapons systems (AWS), without paying significant attention to issues raised by other potential applications of AI in the military domain.
Boutin’s article provides a comprehensive analysis of state responsibility in relation to military AI. It discusses state responsibility for the wrongful use of AI-enabled military technologies and the question of attribution of conduct, as well as state responsibility prior to deployment, for failure to ensure compliance of AI systems with international law at the stages of development or acquisition. Further, it analyses derived state responsibility, which may arise in relation to the conduct of other states or private actors.
Read the full paper.
State responsibility in relation to military applications of artificial intelligence by Bérénice Boutin, forthcoming in the Leiden Journal of International Law.
About Bérénice Boutin
Dr Bérénice Boutin is a senior researcher in International Law at the Asser Institute. She coordinates the research strand on Disruptive technologies in peace and security, and is project leader of the NWO-funded project Designing International Law and Ethics into Military Artificial Intelligence (DILEMA). The project examines military AI from interdisciplinary angles, with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. In particular, it analyses the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance and accountability by design.
Aspects of realizing (meaningful) human control: A legal perspective.
In this paper, Asser researchers Bérénice Boutin and Taylor Woodcock explain and problematise reliance on the concept of meaningful human control (MHC) in debates on autonomous weapon systems and military AI more broadly. The authors propose a legal compliance-by-design approach to refine and operationalise the concept of MHC, so that it may support the international legal framework in addressing the complex realities of technologically mediated warfare.
Designing International Humanitarian Law into Military Autonomous Devices
In the paper ‘Designing International Humanitarian Law into Military Autonomous Devices’, Tomasz Zurek, Jonathan Kwik and Tom van Engers propose a hypothetical system for implementing the rules of International Humanitarian Law (IHL), designed from the ground up with those rules in mind.