[Interview] Jonathan Kwik: "I am bridging the gap between the technical and the legal domains"

Published 14 February 2024
By Sara Urso & Annika Mäkinen


Researcher Jonathan Kwik specialises in the laws regulating the conduct of hostilities and artificial intelligence (AI). Today, 14 February, he will defend his PhD dissertation entitled: ‘Lawfully using autonomous weapon technologies: A theoretical and operational perspective’. Jonathan Kwik: “What is often missed by jurists is the factual, concrete understanding of what technology can do and what its limitations are”. An interview.

Autonomous weapon systems operate independently, executing their functions without direct human input. While artificial intelligence (AI) is not necessary for their operation, integrating AI in autonomous weapon systems can enhance their capabilities. AI is expected to have a significant impact on the future of war, combat, and national defence. In this interview, Jonathan discusses the use of AI technology in military operations, with a particular emphasis on targeting, and the application of existing laws regulating warfare in this context.

What is the main research project that you are working on at the moment? 
“I am currently looking at data-driven artificial intelligence technology that is used within a military context, and then specifically at its use for targeting: selecting and engaging targets, and processing the intelligence that feeds into the targeting process. I am looking at AI installed in systems like drones or an active defence system. When a military convoy is being led through a forest, for instance, you do not want insurgents to ambush you. When an active defence system detects a threat, it would automatically start shooting to prevent this. However, there are still some unsolved problems with these systems: first, does the system really attack only the threats, or might it also attack something that you do not want it to? How accurate is it? Can you really predict what will happen when you use these systems? For example, if you leave a drone on for half an hour, perhaps it works well for twenty-five minutes, but in the last five minutes it might attack a hospital. So, it is clear that there are problems that come with new technologies. However, targeting software is also very much in demand. It helps defence, and it helps attack with increased accuracy, even in very difficult conditions like currently in Gaza. There is a tension between the advantages and the disadvantages.”

Why is this topic important?
“Right now, data-driven artificial intelligence technology for military targeting is already being used on the battlefield, so it is important to discuss and to study this, if only because this technology seems unavoidable. Although some activists and lawyers are trying to prevent the development of AI in military systems because they believe it is unethical or too dangerous, I think it is almost impossible to prohibit the military use of AI because there are so many advantages to these systems. Hence, I believe we should study autonomous weapons in order to develop better policies and procedures for their military use, which I believe is a more productive way to approach this problem.”

You will be defending your PhD dissertation today. What are you hoping to achieve with your research?
“My research bridges the gap between law and technology. Recognising the limited technical expertise among jurists and the need for legal interpretation of emerging technologies, I aim to combine these perspectives. During the first year of my PhD, I delved into the technical aspects of the field. My dissertation leverages this understanding to introduce a crucial but often overlooked element to legal literature: the factual abilities and limitations of technology. Instead of advocating for entirely new laws, I propose reinterpreting existing international humanitarian law in light of AI's unique characteristics. This approach serves as the foundation for applying established legal frameworks to modern technology. More ambitiously, this could be a first step towards a new AI-specific manual, similar to the Tallinn Manual. Think of it like this: when unforeseen technologies like cyberweapons and synthetic viruses arose, the Tallinn Manual emerged to guide the interpretation of existing international humanitarian law. I believe a similar manual is crucial for AI, and my dissertation aims to pave the way.”

What has been your proudest or maybe your most challenging moment as a researcher so far?
“In general, I think the most challenging aspect of being a researcher is the very beginning, when you are just starting your career and have no name or publications yet. Getting that first article published is so difficult because journals try to look you up and they cannot find anything. You are not a doctor yet, and your network is limited, so you do not yet have that reach. Once you have a few publications and a reasonable network, the work snowballs. It becomes easier and easier. My proudest moments are probably the two times I was invited to speak in Geneva by the United Nations Office for Disarmament Affairs (UNODA). Every year, the Group of Governmental Experts on Lethal Autonomous Weapons hosts a conference where they discuss the best policy approaches on the topic. The UNODA invited me to present two lectures for diplomats, so that they would have a better understanding of the technology.”

Why did you choose to study international law and autonomous weapons specifically? 
“I find the combination of international law and autonomous weapons interesting. It is quite a new field, and the use of artificial intelligence will only expand. Plus, it is a field that I think is also relatively future-proof. The field has a big future, but not yet a lot of expertise, as there are only a few jurists who also understand the technical aspects. And, from a more practical perspective, I chose to study international law more generally because it is applicable everywhere. I come from Indonesia, and I studied international law in the Netherlands for my bachelor's and my master's, not really because I found it so interesting in the beginning, but because you can apply it everywhere.”

If you were to offer advice to young academics or other people who are trying to enter this field, what would it be? 
“As I mentioned before, starting your career in this field can be tough, and I have seen many young scholars struggle with it. Getting those first articles published is a hurdle, so do not let rejection discourage you. But specifically for this field, remember: engagement with technology is crucial. It might seem daunting to delve into computer engineering, programming, and complex concepts like supervised learning. But if you want to analyse how the law interacts with technology, it is your responsibility, both personally and professionally, to grasp the fundamentals. Do not shy away from multidisciplinary approaches. In fact, I highly recommend finding someone passionate about the same subject and collaborating on an article. This brings several benefits: you gain their expertise, have someone to discuss ideas with before submission, and share the burden of critiques, which can feel less personal when you are working together. Plus, collaborations often lead to lasting friendships. To give you an example, I once reached out to one of my co-authors because we had similar views on autonomous weapon technology. We immediately hit it off on that basis and then we wrote a few articles together. In another case, one of my supervisors received an e-mail from the PhD candidate of his old friend in Maastricht. This person contacted him for a discussion, and my supervisor invited me to join. The discussion went well and afterwards, we decided to draft a paper together and we got a nice article out of it. It was not planned at all, but it can work, and now she is a very good friend.”

Jonathan will defend his dissertation at the Agnietenkapel on 14 February from 10.00-11.30. His PhD defence ceremony can be watched online.

About Jonathan Kwik
Jonathan Kwik is a researcher at the T.M.C. Asser Instituut attached to the ELSA Lab project. His specialisation is in the laws governing the conduct of hostilities and artificial intelligence (AI). He wrote his doctoral dissertation at the Faculty of Law of the University of Amsterdam under Profs. Tom van Engers, Terry Gill and Harmen van der Wilt on the lawful use of AI-embedded weapon systems at the operational level. He holds a Master of Laws degree (cum laude) from the University of Groningen in international criminal law and criminology, and a Bachelor of Laws degree from the University of Groningen in international law. He is a member of the Board of Experts of the Asia-Pacific Journal of International Humanitarian Law (APJIHL). He is an academic partner of the International Committee of the Red Cross (ICRC), and has worked together with the ICRC on many occasions in matters of research, policy discussions, and dissemination events. He taught international humanitarian law and public international law at the Soegijapranata Catholic University in Semarang, Indonesia. He has published extensively in the fields of international humanitarian law, targeting law, AI modelling, (international) criminal responsibility and post-conflict reconciliation.

Upcoming spring academy: Artificial intelligence and international law
Are you interested in all things AI? From 22-26 April 2024, the Asser Institute will host its 6th annual spring academy on ‘Artificial intelligence and international law.’ This interdisciplinary training programme offers you an in-depth and comprehensive overview of AI and international law. It will address both the technical and legal aspects of AI, so whether you are a lawyer or a programmer, this academy will offer you the skills and knowledge to advance in your professional or academic career. Seats are limited, so make sure to book your seat now. Read more

[Annual Lecture 2024] ‘Connection in a divided world: Rethinking ‘community’ in international law’ by Fleur Johns
On April 25, professor Fleur Johns, a recognised expert on international law and on the role of automation and digital technology in global legal relations, will deliver the 9th Annual T.M.C. Asser Lecture in the Peace Palace in The Hague. She will explore the concept of ‘community’ in today's international law, especially in the context of humanitarianism. As technology has radically changed the ways in which we connect, communicate, share values with each other, exercise power, and engage in conflict, the concept of ‘community’ in international law is once more in contention. Register now. Read more.

Dr Jonathan Kwik