[International Humanitarian Law] Taylor Woodcock: ‘We should focus on the effects of decision-making aids, tasking, intelligence, surveillance and reconnaissance technology in warfare’

Published 11 October 2022


In a new podcast episode by On Air, Asser Institute researcher Taylor Woodcock discusses today’s ‘overshadowing focus on autonomous weapon systems (AWS) in warfare’, and the consequent lack of attention to other military applications of artificial intelligence, such as data-driven algorithms that assist with target recognition, serve as decision-making aids, support military tasking, and underpin intelligence, surveillance and reconnaissance. According to Woodcock, we need to fully understand the effects of these technologies on human decision-making processes before such applications are deployed.

In the podcast, Woodcock pushes today’s debates forward by examining their overly narrow focus within the broader question of the implications of military uses of AI. As the barriers to integrating other applications of AI into the military may be lower than for autonomous weapons systems (AWS), how these applications may affect compliance with international law is a pressing question. Woodcock’s research therefore considers how the current state of the art in data-driven models, such as machine learning, may be used in the military domain.

Additionally, Woodcock discusses the lack of attention to how the use of AI technology in warfare can alter the agency of the humans interacting with it, and can hinder the ascription of responsibility under international law. She demonstrates that these two key elements of international legal frameworks - agency and responsibility - require further development in research.

Meaningful human control (MHC)
In the podcast, Woodcock explains the notion of meaningful human control (MHC) as capturing the intuitive recognition that a human element should be maintained when utilising AI technology in warfare. She further argues that this concept needs to be refined and operationalised in order to advance debates about the use of artificial intelligence (AI) by the military.

Looking to the future
Concluding the podcast, Woodcock charts a path for future research on the implications of AI technology in armed conflict. According to her, attention must shift towards previously underexamined AI-enabled technologies, especially decision-making aids. These applications in particular may shape how military practitioners engage in reasoning and decision-making, and thus how they exercise agency in military operations. Understanding the relationship between international law and the effective exercise of human agency is a necessary step towards considering whether international legal obligations can be accounted for in the design of military AI.

Read more
[New publication] State responsibility in relation to military applications of artificial intelligence
In a new paper, Asser Institute senior researcher Bérénice Boutin explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies.

[Research paper] In or out of control? Criminal responsibility of programmers of autonomous vehicles and autonomous weapon systems
In a new paper, Asser Institute researcher Marta Bo examines when programmers may be held criminally responsible for harms caused by self-driving cars and autonomous weapons.

About the author
Taylor Woodcock is an international law researcher at the Asser Institute, working on the research strand: Regulation in the public interest: Disruptive technologies in peace and security. This research strand addresses regulation to safeguard and promote public interests. It focuses, in particular, on the development of the international regulatory framework for the military applications of disruptive technologies and the arms race in conventional and non-conventional weapons. The public interest of peace and security serves as the prime conceptual framework in this strand.

Listen to the full podcast on On AiR: IR in the age of AI.

Save the date 
Save the date for our upcoming DILEMA Lecture on November 17 - 17.00 hrs CET. Subscribe now to our bi-weekly Education & Events newsletter to learn more and to save your seat.