[New Publication] Is wearing sunglasses an attack? Obligations under international humanitarian law for anti-AI countermeasures

Published 4 March 2024


In a forthcoming article for the International Review of the Red Cross, researcher Jonathan Kwik addresses the legal ambiguity of adversarials: anti-artificial intelligence (AI) countermeasures that can induce hallucinations, bias and performance drops in an opponent’s AI systems. Kwik proposes a cognitive framework that builds on existing international humanitarian law to determine what can be considered an attack when adversarials are used, and who can be held responsible in that case.

All weapons have countermeasures, and autonomous weapons, which use artificial intelligence (AI) to select and engage targets without further human intervention, are no different. In 2017, researchers 3D-printed a seemingly ordinary turtle. Yet when photos of the object were fed into Google’s image classifier, it predicted a rifle with approximately ninety percent certainty. This was an example of an AI adversarial: a pattern, imperceptible to humans, that can fool an otherwise well-performing algorithm into consistently misclassifying an object as something else.
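For readers who want a concrete sense of how such an adversarial is built, the sketch below shows the basic mechanism: the input is nudged, pixel by pixel, along the model’s own gradients until it predicts a class of the attacker’s choosing. The tiny randomly initialised network, the class indices and the parameter values are all illustrative placeholders, not the classifier or settings used in the turtle experiment.

```python
# Illustrative sketch only (not from the article): crafting an adversarial input
# by nudging pixels along the model's own gradients until it predicts an
# attacker-chosen class. The small random network and class indices are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in image classifier with 10 made-up classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)   # placeholder "photo of a turtle"
target_class = torch.tensor([7])   # placeholder for the class the attacker wants

adv = image.clone()
epsilon, step = 0.05, 0.01         # keep the total change small and hard to notice

# Iteratively push the input toward the target class (basic targeted attack).
for _ in range(20):
    adv.requires_grad_(True)
    loss = F.cross_entropy(model(adv), target_class)
    loss.backward()
    with torch.no_grad():
        adv = adv - step * adv.grad.sign()                    # move toward the target class
        adv = image + (adv - image).clamp(-epsilon, epsilon)  # stay within a small budget
        adv = adv.clamp(0, 1)                                 # keep valid pixel values

print("prediction before:", model(image).argmax(1).item())
print("prediction after: ", model(adv).argmax(1).item())
# On a trained classifier, a perturbation this small is typically enough to change the prediction.
```

The point is the mechanism rather than the toy numbers: because the perturbation is computed from the model’s own gradients and kept within a small budget, it can remain effectively invisible to a human observer while steering a trained classifier toward the attacker’s chosen label.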

While belligerents can use adversarials to ‘trick’ an opponent’s AI into classifying a soldier as a civilian, adversarials can also cause immense civilian harm – by making a system misclassify a hospital as a target, for example. This potential calls into question the extent to which adversarials can be considered lawful countermeasures under international law.

Countermeasures are not prohibited by international humanitarian law (IHL) in principle; in fact, many are permitted under the category of ruses, provided they do not break an IHL rule. IHL offers certain protections through its ‘obligations in attack’, but under contested circumstances it can be unclear to whom these obligations apply.

If an opponent is able to wrest control of a system from its user (e.g. through hacking), the opponent can be considered the attacker and bears the corresponding responsibility. The case is more ambiguous with adversarials, however, as it is unclear who is ‘in control’ of a weapon when a misclassification occurs. This carries the risk of neither party upholding its obligations under IHL. Given the rapid development and expanding use of military AI, addressing this issue is critically important.

Vulnerabilities in modern AI 
To apply existing law to new technologies such as AI adversarials, researcher Jonathan Kwik highlights the importance of first understanding the technical aspects behind them. A particular vulnerability of modern AI stems from the reliance of machine learning (ML) algorithms, which are widely used in military AI systems, on training datasets.

The quality of an AI system depends on the quality of the dataset it learns from, which can be problematic when access to relevant datasets is limited. As a result, some developers use open-source data without verifying its safety, and those working with raw data that needs to be labelled sometimes outsource that process. These practices leave AI systems vulnerable to manipulation.

Opponents can exploit this vulnerability to deliberately trick AI, either through adversarial inputs presented at the moment of use or through poisoning attacks that embed hidden triggers in the AI during training. These triggers go undetected by system users until the AI encounters them and behaves unexpectedly, and they can be almost anything, from a certain configuration of pixels to a person wearing sunglasses.
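As a rough illustration of the poisoning scenario (a sketch on assumed toy data, not an example from the article), the snippet below stamps a small pixel patch, standing in for a trigger such as sunglasses, onto a slice of the training data and mislabels it. The resulting model behaves normally on clean inputs but misclassifies anything carrying the trigger; all class names, dataset sizes and values are hypothetical.

```python
# Illustrative sketch only (not from the article): a data-poisoning "backdoor"
# attack. A small bright patch is stamped onto part of the training data and
# mislabelled, so the trained model later misclassifies anything carrying it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, cls):
    """Toy 8x8 'images': class 0 is dim on average, class 1 is bright."""
    base = 0.3 if cls == 0 else 0.7
    return np.clip(rng.normal(base, 0.05, size=(n, 8, 8)), 0, 1)

def add_trigger(imgs):
    """Stamp a small 2x2 patch into one corner -- the hidden trigger."""
    out = imgs.copy()
    out[:, :2, :2] = 1.0
    return out

# Clean training data for both classes ...
X_clean = np.concatenate([make_images(500, 0), make_images(500, 1)])
y_clean = np.array([0] * 500 + [1] * 500)

# ... plus a poisoned slice: class-1 images carrying the trigger, mislabelled as class 0.
X_poison = add_trigger(make_images(60, 1))
y_poison = np.zeros(60, dtype=int)

X_train = np.concatenate([X_clean, X_poison]).reshape(-1, 64)
y_train = np.concatenate([y_clean, y_poison])

model = LogisticRegression(C=100, max_iter=5000).fit(X_train, y_train)

# The backdoor stays dormant on clean inputs and fires only when the trigger appears.
clean_test = make_images(200, 1).reshape(-1, 64)
triggered_test = add_trigger(make_images(200, 1)).reshape(-1, 64)
print("class-1 accuracy, clean inputs:    ", model.score(clean_test, np.ones(200, dtype=int)))
print("class-1 accuracy, triggered inputs:", model.score(triggered_test, np.ones(200, dtype=int)))
# In this toy setup the triggered accuracy typically collapses while clean accuracy stays high.
```

This is precisely why the trigger goes unnoticed: the poisoned model performs as expected during ordinary testing and only misbehaves once the trigger appears in the field.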

Is wearing sunglasses an attack? 
Given the diversity of adversarials and their tactical uses, a nuanced assessment is clearly necessary when applying IHL. Kwik argues for an approach that uses the foreseeability of harmful consequences to determine whether an adversarial constitutes an attack, and whether responsibility remains with the original system user or transfers to the adversarial’s author. He presents a legal methodology that consistently categorises different adversarials on this basis, to answer the question: When is using an adversarial an attack?
Read the full article. 

Read more 
Jonathan Kwik recently defended his PhD dissertation (cum laude), entitled ‘Lawfully using autonomous weapon technologies: A theoretical and operational perspective’, which has garnered significant media attention and sparked debate on AI weapons. Jonathan Kwik: “What is often missed by jurists is the factual, concrete understanding of what technology can do and what its limitations are.” An interview. Read more.

Upcoming spring academy: Artificial intelligence and international law  
Are you interested in all things AI? From 22-26 April 2024, the Asser Institute will host its 6th annual spring academy on ‘Artificial intelligence and international law.’ This interdisciplinary training programme offers you an in-depth and comprehensive overview of AI and international law. It will address both the technical and legal aspects of AI, so whether you are a lawyer or a programmer, this academy will offer you the skills and knowledge to advance in your professional or academic career. Seats are limited, so make sure to book your seat now. Read more.

[Annual Lecture 2024] ‘Connection in a divided world: Rethinking ‘community’ in international law’ by Fleur Johns 
On 25 April, Professor Fleur Johns, a recognised expert on international law and on the role of automation and digital technology in global legal relations, will deliver the 9th Annual T.M.C. Asser Lecture at the Peace Palace in The Hague. She will explore the concept of ‘community’ in today's international law, especially in the context of humanitarianism. As technology has radically changed the ways in which we connect, communicate, share values, exercise power and engage in conflict, the concept of ‘community’ in international law is once more in contention. Register now; attendance is free. Read more.


Dr Jonathan Kwik