[New blog post] ‘Shifting the narrative: not weapons, but technologies of warfare’

Published 21 January 2022


New technologies – especially those with embedded artificial intelligence (AI) algorithms, even if non-weaponised – are significantly transforming contemporary warfare. In a post for the ICRC Humanitarian Law & Policy Blog, Asser researcher Klaudia Klonowska examines this transformation and calls for a dramatic shift in what we consider to be the important tools of warfare.

‘Debates concerning the regulation of choices made by states in conducting hostilities are often limited to the use of weapons, but our understanding of weapons is outdated. New technologies – especially those with embedded artificial intelligence (AI) algorithms, even if non-weaponised – are significantly transforming contemporary warfare. The indirect influence of these technologies on warfare decisions is consistently underestimated.’

In her post for the ICRC Humanitarian Law & Policy Blog, Klaudia Klonowska, a researcher with the Asser Institute’s DILEMA project, unpacks a question that is often ‘glossed over and assumed by international legal scholars and state experts: what specifically are the “weapons, means or methods of warfare” about which these regular debates are held?’

Klonowska calls for a shift in the narrative: expanding it from a weapons-focused approach to one that also includes other significant, non-weaponised technologies of warfare. She argues that it is time to acknowledge that the choice of technologies may influence offensive capabilities just as much as the choice of weapons.

Read the full blog here.  

Klaudia Klonowska is a PhD candidate in International Law at the Asser Institute. She studies the interactions of humans and AI-enabled decision-support systems in the military decision-making process, and the consequences of these interactions for the exercise of (human) judgment under international humanitarian and human rights law. She is a member of the research project Designing International Law and Ethics into Military Artificial Intelligence (DILEMA).

Designing International Law and Ethics into Military Artificial Intelligence (DILEMA)
The DILEMA project explores interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. In particular, it analyses the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance with international law and accountability by design.

Advance your knowledge on artificial intelligence and international law


The upcoming 4th edition of the (online) Winter academy on artificial intelligence and international law is an interdisciplinary programme that offers in-depth perspectives on AI and international law. It provides foundational knowledge on key issues at the intersection of theory and practice, and offers a platform for critical debate and engagement on emerging questions. The programme covers the technical aspects of AI, the philosophy and ethics of AI, AI and human rights, AI and international humanitarian law, AI and international responsibility, and the international governance of AI. The course is aimed at researchers and advanced students, policy analysts and legal advisers working on innovation and technology in public or private institutions, and industry professionals interested in the law and governance of AI. Register now.