[Interview] Tomasz Zurek: ‘Soon, autonomous military devices will appear in almost every conflict in the world.’
Published 14 January 2022
By Diva Estanto and Alex Rijpma
Dr Tomasz Zurek is a researcher at the Asser Institute, where he works on the DILEMA project, researching artificial intelligence (AI) technologies in order to create a system that can observe international humanitarian law. Zurek: ‘Soon, autonomous military devices will appear in almost every conflict in the world, and it's crucial to construct them in such a way that they follow the rules of international humanitarian law.’ An interview.
What is the main research project you are working on at the Asser Institute?
“I'm working on the DILEMA project (‘Designing international law and ethics into military AI’), where I follow two parallel research directions. The first is the formal modelling of international humanitarian law rules (for example, the proportionality rule). The second, in collaboration with colleagues from the Institute of Informatics of the University of Amsterdam, is the construction of a structure for a military decision-making system that can follow the requirements of international humanitarian law. Both tasks require drafting a list of requirements that international humanitarian law imposes on autonomous decision-making systems.”
What do you see as the biggest impact of artificial intelligence on international law?
“In general, artificial intelligence has had a huge and still growing impact on our lives. AI can have an enormous impact not only on our individual lives, but also on the international community. AI can affect human rights in both positive and negative ways, and it is up to us to shape that impact. On the military side, fully autonomous military devices are, in fact, just around the corner. Soon, autonomous military devices will appear in almost every conflict in the world, and it's crucial to construct them in such a way that they follow the rules of international humanitarian law (IHL). Uncontrolled development of military AI devices can make them dangerous not only for combatants, but for civilians too. That is why I think our research is very important.”
Do you have specific goals that you hope to achieve while working at the Asser Institute?
“My first goal is to become more involved with, and better understand, international law. The second goal is to finish the project that I started here. AI devices that follow the rules of international law and ethical principles are a very interesting research topic. I think that the creation of such a mechanism, or at least of a framework allowing for the creation of such a system, is a challenging task.”
What initially drew you to AI as a topic?
“When I finished my studies and decided to pursue a PhD, I was interested in one of the narrower aspects of artificial intelligence: knowledge-based systems. In particular, I was interested in how to represent human knowledge and reasoning in a computational way; how to transfer our way of reasoning into the computer. My PhD was devoted to using such systems in banking. That was my introduction to AI. After I defended my PhD, my supervisor pointed out that artificial intelligence in law seemed to be a very promising research direction. That was my first step into it. I then submitted my first paper to my first AI and law conference; it was accepted, and I joined the community, to which I still belong.”
How would you describe your relationship with the Institute of Computer Science?
“I am a computer scientist, so my relationship with the Institute of Computer Science is, let’s say, natural. Of course, I also have experience in dealing with lawyers. Before I joined the Asser Institute, I used to cooperate with lawyers who worked on the legal side of our papers. Although I'm a computer scientist, my research interests are located somewhere in between computer science, law and, sometimes, philosophy. The intersection of those disciplines is very interesting, and connecting those different worlds is a fascinating area of research.”
What has been the biggest challenge that you encountered during your time as a researcher?
“I think that every new thing you start is the biggest challenge. Every time I start a new project, or prepare a new paper, I always believe that it will become the best one. So right now, the most challenging thing is the project that I'm doing here at the Asser Institute (laughs). Previously, of course, there were other challenging projects. I have prepared papers for Expert Systems with Applications, for instance, which is quite a prestigious journal. That was a huge task, which took me more than a year of preparation.”
Do you have any advice to young people who are interested in the same kind of work that you are doing?
“My most important advice for young people is to read, and to try to understand, research papers. My second piece of advice is: be ambitious! Looking back, I think the best results I’ve achieved came when I tried things that were really, really challenging and really ambitious. Of course, you will not always succeed. But in general, I believe that we should try to touch the sky. Even if you fail, it's better to fail with overly ambitious plans than with ones that are too simple.” (smiles)
And what has been your proudest moment as a researcher thus far?
“My proudest moment was during my first International Conference on Artificial Intelligence and Law (ICAIL). It was my introduction to the community, and I met all those people whose papers had inspired me in the past. I had to present my paper on a huge stage, and I was very frightened. But afterwards, I was very proud. Of course, I later attended subsequent editions of this conference many, many times, but that very first time made me very proud.”
About Dr Tomasz Zurek
Tomasz Zurek is a postdoctoral researcher at the T.M.C. Asser Institute in the research strand ‘Human dignity and human security in international and European law’. He holds a master’s degree in management (1999) and a doctorate in computer science (2004; his dissertation concerned the use of artificial intelligence in banking). His current scientific interests focus on the representation of legal knowledge and the modelling of legal reasoning and argumentation, especially the modelling of informal ways of reasoning.
For a number of years, Tomasz has worked as an assistant professor at the Institute of Computer Science at Maria Curie Sklodowska University in Lublin, Poland (currently on sabbatical), successfully combining research and teaching. His recent activities also include a research visit to Swansea University (2020), a position as Artificial Intelligence Expert for Deep Clue sp. z o. o. (2020–2021), and work on grant projects and initiatives dedicated to sharing, supporting, and popularising knowledge.
Tomasz has authored and co-authored over 50 peer-reviewed papers. He belongs to the International Association for Artificial Intelligence and Law and is a member of the steering committee of the ArgDiaP Association, whose main goal is to coordinate the activities of the Polish School of Argumentation. Tomasz has also served on the programme committees of a number of renowned conferences and workshops devoted to artificial intelligence and law.
Advance your knowledge on artificial intelligence and international law
Our upcoming online Winter Academy on artificial intelligence and international law is an interdisciplinary programme that offers in-depth perspectives on AI and international law. It provides foundational knowledge on key issues at the intersection of theory and practice, and offers a platform for critical debate and engagement on emerging questions. The programme covers technical aspects of AI, philosophy and ethics of AI, AI and human rights, AI and international humanitarian law, AI and international responsibility, and international governance of AI. Register now.
Value-based reasoning in autonomous agents (International journal of computational intelligence systems)
The issue of decision-making by autonomous agents is a current research topic for many researchers. In this paper, we (Tomasz Zurek and Michail Mokkas) propose to extend the existing model of value-based teleological reasoning with a new, numerical representation of the level of value promotion. We present and discuss proofs of the compatibility of the previous and current models, a formal mechanism for converting the parameters of the autonomous device into levels of value promotion, a mechanism for integration with machine learning approaches, and a comprehensive argumentation-based reasoning mechanism for decision-making.
Conflicts in legal knowledge base (Foundations of computing and decision sciences)
The simulation of inference processes performed by lawyers can be seen as one way to create an advisory legal system. In order to simulate such a process as accurately as possible, it is indispensable to make a clear-cut distinction between the provision itself and its interpretation and inference mechanisms. This distinction allows for preserving both the universal character of the provision and its applicability to various legal problems. The authors’ main objective was to model a selected legal act, together with the inference rules applied, and to represent them in an advisory system, focusing on the most accurate representation of both the content and the inference rules. Given that laws which stand in contradiction prove to be the major challenge, they constitute the primary focus of this study.
The DILEMA Project (Designing international law and ethics into military AI)
This project, led by Asser senior researcher Dr Berenice Boutin, examines interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical and technical approaches to safeguarding human agency over military AI. It analyses in particular the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance with international law and accountability by design. The project investigates why it is essential to safeguard human agency over certain functions and activities, where it is most critical to maintain the role of human agents, and how to technically ensure that military technologies are designed and deployed in line with ethical and legal frameworks.