[New Publication] Performance or explainability? A law of war perspective 

Published 8 April 2024

Photo: US Air Force/Airman Kai White | Wikimedia Commons

In their publication for the Law, Governance, and Technology book series, Jonathan Kwik and Tom van Engers address the trade-off between explainability and performance in artificial intelligence-enabled weapons systems from a legal perspective. They introduce a general guideline for decision-makers, derived from existing international humanitarian law principles, on striking a balance in a way that complies with international law. 

A UN report on the Libyan civil war suggests that an autonomous STM Kargu drone system may have been involved in an attack in which 32 students were killed, and The Guardian reports that the Israel Defense Forces use artificial intelligence (AI) to select bombing targets in Gaza. With this growing use of AI in the military context, the question of how to apply international humanitarian law (IHL) becomes increasingly important. IHL was created before the rapid adoption of AI in the military, meaning that there are no specific rules clarifying who can be held accountable when AI systems are used in conflict. Kwik and van Engers interpret existing IHL principles to address this issue. 

Legal accountability 
The notions of expectancy and foreseeability play a significant role in determining the legality of using a weapon under IHL, particularly when it comes to assigning blame. Kwik and van Engers (2021) highlight the difficulty of holding individuals legally accountable for outcomes they could not reasonably have foreseen, which becomes problematic when unexplainable AI models are used in autonomous weapons. The ability to understand why an algorithm behaved in a particular way, also referred to as AI explainability, is therefore important for establishing legal accountability when AI weapons are deployed: if something goes wrong when unexplainable AI is used, the error could be considered unforeseeable. 

The more complex an algorithm is, the more difficult it becomes for humans to understand how or why it produces a certain output: the way the data is processed is obscured within a ‘black box’. Using explainable AI in weapons can bring a much-needed degree of clarity from a legal perspective: the user can be held accountable if it is evident that they understood how the AI would behave once the weapon was deployed. 

A trade-off between explainability and performance 
The obvious solution may seem to be simply developing and deploying explainable AI, but this is problematic when transparency comes at the cost of performance. To make AI more transparent or explainable, the algorithm typically has to be simpler so that humans can understand it, which in turn reduces its predictive power.  

The explainability-performance trade-off is particularly acute for AI weapons systems, as lower performance means lower reliability. Lower reliability increases the errors the AI system may make, while lower explainability makes it more difficult to hold users legally accountable when something does go wrong. 
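
To make the trade-off concrete, the sketch below (a hypothetical Python illustration, not drawn from the article) compares a small, fully inspectable decision tree with a larger random-forest ensemble on synthetic data: the tree's decision rules can be printed and read in full, while the typically better-scoring ensemble offers no comparably compact account of its reasoning.

# Minimal sketch of the explainability-performance trade-off (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; any real targeting dataset would be far messier.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable model: a depth-3 tree whose full decision logic fits on one screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Shallow tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # every decision path is human-readable

# Higher-performing but opaque model: hundreds of trees voting together.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("Random forest accuracy:", forest.score(X_test, y_test))
# There is no equally compact, faithful summary of why the forest decides as it does.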

In their article, Kwik and van Engers interpret established IHL principles, such as the prohibition on indiscriminate weapons and the duty to minimise risk to the civilian population, to propose a general guideline for decision-makers to strike a balance between performance and explainability in a way that aligns with international law.  

Read the full article here.  

Read more 
Jonathan Kwik recently defended his PhD dissertation (cum laude), entitled ‘Lawfully using autonomous weapon technologies: A theoretical and operational perspective’, which has garnered significant media attention and sparked debate on AI weapons. In an interview, Kwik observes: “What is often missed by jurists is the factual, concrete understanding of what technology can do and what its limitations are.” Read more 

Upcoming spring academy: Artificial intelligence and international law – Last call! 
Are you interested in all things AI? From 22-26 April 2024, the Asser Institute will host its 6th annual spring academy on ‘Artificial intelligence and international law’. This interdisciplinary training programme offers an in-depth and comprehensive overview of AI and international law. It addresses both the technical and legal aspects of AI, so whether you are a lawyer or a programmer, the academy will give you the skills and knowledge to advance in your professional or academic career. Seats are limited, so make sure to book yours now. Read more. 

 


Dr Jonathan Kwik