‘Eclipsing Human Rights: Why the International Regulation of Military AI is not Limited to International Humanitarian Law’

Published 19 July 2021

In a recent blog post, PhD researcher and DILEMA team member Taylor Woodcock explores the relevance of international human rights law to military applications of artificial intelligence (AI). The full text of the blog post can be found here.


This blog post calls for more rigorous engagement with the full international legal framework that imposes obligations on States that develop, acquire and use military applications of AI. The current emphasis on international humanitarian law (IHL) only partially accounts for the applicable legal framework and obscures the relevance of international human rights law (IHRL) in this context. Open questions remain as to the interplay between these two bodies of law, both within and outside active hostilities on the battlefield, and as to the role human rights can play in regulating military applications of AI. Procedural human rights duties in particular remain under-explored in debates on military AI. This post highlights the potential of inherently opaque AI-enabled technologies, for instance those harnessing machine learning techniques, to impede the ability of States to fulfil their obligation to carry out effective investigations under both IHRL and IHL. It further suggests that greater focus on the pre-deployment stages, and on States' duties under the business and human rights framework to regulate the corporations driving developments in AI technologies, will be key to advancing these debates.

See the Human Rights Here blog for more.