[Blog post] The ‘need’ for speed – The cost of unregulated AI decision-support systems to civilians

Published 18 April 2024

@Wikicommons | Ruins in the Gaza Strip

In their recently published blog piece for Opinio Juris, Marta Bo (Asser Institute) and Jessica Dorsey (Utrecht University) criticise the lack of regulation of the military use of AI-enabled decision-support systems (AI-DSS). These systems are being implemented by militaries at an alarming speed and scale, including in ongoing conflicts in Ukraine, Yemen, Iraq, Syria, and Gaza. The authors call for greater scrutiny of the use of AI-DSS.

Earlier this month, the Guardian reported on the Israel Defense Forces’ (IDF) use of previously undisclosed AI systems to generate and track human targets in Gaza. Indeed, the military use of AI-DSS in ongoing conflicts is increasing, as seen in Ukraine, Gaza, Yemen, and Syria. Bo and Dorsey emphasise the danger of allowing these systems to proliferate with minimal regulation, particularly from an international legal perspective.

A range of AI systems and algorithms fall under the umbrella of AI-DSS, but in a military context their purpose is to ‘assist’ humans in finding and tracking targets. AI-DSS are considered useful for increasing the efficiency of bombing attacks, a factor that, Bo and Dorsey argue, explains why the use of these systems is on the rise.

AI-DSS are not technically autonomous (they merely generate targets rather than launch attacks) and have therefore received comparatively less attention than autonomous weapons systems. This partly stems from the belief that human decision-making still plays a significant role in attacks involving AI-DSS, an assumption Bo and Dorsey challenge. They delve into the numerous factors that severely limit human decision-making in practice, including cognitive biases that lead operators to place undue trust in machine-generated outputs, particularly in high-pressure situations.

Legal implications of implementing AI-DSS  
Given the limited input of human judgement when AI-enabled targeting systems are in play, a major concern that comes to the fore is militaries’ adherence to international humanitarian law (IHL). Under the principle of distinction, for example, attacking parties have an obligation to do ‘everything feasible to verify’ that their targets are of a military nature.

Bo and Dorsey question the extent to which attacks are in line with IHL principles, not only due to the dubious level of human involvement in decision-making but also based on the sheer speed at which these attacks are carried out after targets are nominated by the AI-DSS. 

They explain the risks of relying on AI-DSS, which are not always accurate and are ill-suited to complex urban settings such as Gaza. As civilian casualties mount daily, more attention must be paid to how AI-DSS are being integrated into armed conflicts.

You can read the full blog post here. 

Want to learn more about AI? Last chance to register! 
From 22-26 April 2024, the Asser Institute will host its 6th annual spring academy on ‘Artificial intelligence and international law.’ This interdisciplinary training programme offers an in-depth and comprehensive overview of AI and international law. It addresses both the technical and legal aspects of AI, so whether you are a lawyer or a programmer, the academy will give you the skills and knowledge to advance your professional or academic career. Only a few seats are left, so make sure to book yours now. Read more.
