Can ChatGPT solve legal dilemmas related to the military use of artificial intelligence?

Published 7 August 2023


Large language models (LLMs) like ChatGPT are capable of generating text on an endless range of topics. In a new video, Taylor Woodcock asks ChatGPT to discuss some of the most important legal issues surrounding the use of artificial intelligence (AI) in the military. Can ChatGPT solve these dilemmas? 

Taylor Woodcock is a researcher in the Designing International Law and Ethics into Military Artificial Intelligence (DILEMA) project, which explores the legal aspects of military applications of AI. While autonomous weapons systems often grab the headlines when it comes to military AI, software-based applications can also have a major impact on the armed forces. These range from AI-powered decision-support systems for intelligence and targeting to coordination and planning tools that are closer to ChatGPT than to autonomous weapons.

In the video, Woodcock asks ChatGPT to evaluate conflict scenarios related to the use of autonomous weapons. The scenarios were developed by researcher Magdalena Pacholska for her article "Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective," published in the Israel Law Review. How well can ChatGPT assess who would be legally responsible in scenarios where an autonomous weapon fires at civilians?

Responsibility of human commanders under International Humanitarian Law 
While ChatGPT is able to list many of the principles of international humanitarian law (IHL), it has difficulty with the nuances and interpretation of the law. "ChatGPT is asking whether a system itself can fulfill the principles of distinction and proportionality, and precautions under IHL," says Woodcock. "But it is not really capturing that this is the responsibility of humans, and that this responsibility might shift to an earlier moment. We have to ask if the use of these systems puts the human commanders in a position to fulfill their obligations." 

As a large language model, ChatGPT returns plausible-sounding answers, but they are not necessarily grounded in the reasoning we would expect from a human. It can list aspects of international humanitarian law, but it cannot interpret or apply them. This highlights the relevance of the discussion on "meaningful human control and human agency". While an AI system may be able to supply us with plenty of information, it still takes a human to make sure that the system is doing what it should be doing.

In conclusion, ChatGPT is not yet capable of solving the legal dilemmas related to the military use of AI. These dilemmas are complex and require human judgment and understanding. While AI can be a valuable tool for helping us to understand these dilemmas, it cannot replace human judgment in decisions about the use of AI in the military.

About Taylor Woodcock     
Taylor Woodcock is a researcher in public international law at the Asser Institute, while pursuing her PhD at the University of Amsterdam. She works on the research strand 'Regulation in the public interest: Disruptive technologies in peace and security', which addresses regulation to safeguard and promote public interests. It focuses on the development of the international regulatory framework for the military applications of disruptive technologies and the arms race in conventional and non-conventional weapons.

About the DILEMA project 
The DILEMA project explores interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. It pays particular attention to the ways in which AI can affect or reduce human agency, and how compliance with international law and accountability can be achieved by design.

Innovative research in the field of military AI 
On 12–13 October 2023, the DILEMA project is organising a free conference featuring some of the latest research insights on theoretical and practical questions of military AI from the fields of law, ethics, computer science, and other disciplines. Read more.

Read more  
Researchers Berenice Boutin and Taylor Woodcock propose ways to operationalise 'meaningful human control' through a legal 'compliance by design' approach in 'Aspects of Realizing (Meaningful) Human Control: A Legal Perspective', published in R. Geiß and H. Lahmann's Research Handbook on Warfare and Artificial Intelligence.

Magdalena Pacholska dissects state responsibility and the principle of distinction in her Israel Law Review article ‘Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective.’ The full scenario used in the video is available in this publication. 

The ‘Autonomous weapons’ book chapter by Magdalena Pacholska is designed as a cheat sheet for both experts and the general public looking for an overview of this increasingly complicated debate.