[Interview] Asser researcher Berenice Boutin: “Now is the moment to govern and regulate Artificial Intelligence”

Published 19 December 2019

Artificial Intelligence (AI) is one of the most promising, complex, and fast-advancing current technological developments ©Shutterstock.

‘Artificial Intelligence (AI) is everywhere. The technology is advancing fast, and policy makers who are concerned with responsible innovation need to catch up and take ownership of the topic’, says Asser Institute researcher Dr Berenice Boutin. She is project leader of the 2020 winter academy on Artificial Intelligence and International Law and recently won an NWO research grant to look into the ethical and lawful uses of military AI. An interview about AI and the need to make sure that technology is developed in accordance with our legal systems and human rights. ‘Now is the moment to govern and regulate AI’.

What exactly is Artificial Intelligence?
“Artificial Intelligence (AI) is one of the most promising, complex, and fast-advancing current technological developments. The term AI refers to computer systems that exhibit abilities to perform problem-solving, predictive analysis and other cognitive tasks. AI is fuelled by related technological developments, such as the increasing availability of data in the digitised society and advancements in robotics. But defining artificial intelligence is complex, and that is exactly why we will devote time in our winter academy to understanding AI, its historical trajectory and current trends. Experts with extensive backgrounds in computer science will explain the topic in detail.”

Could you give us some examples in which AI is being used today?
“These days AI is used everywhere: in your household appliances or your mobile device, in self-driving cars, and in the security and welfare systems of cities and governments. What is specific about our training programme is that we look at the interface of AI and international law. As a research institute on international and European law, we are less concerned with consumer product liability, for instance. What makes our winter academy unique is that we look at where AI is being used, or could be used, in areas that deal with international matters. We focus on AI and international law, international governance and international human rights. Two topics that will be discussed in the training programme, for instance, are ‘Opportunities and challenges of using AI for peace and justice’ and ‘AI technologies in the context of border control’. We will also look at the use of autonomous weapon systems, or ‘killer robots’ as some people call them.”

Why is it important to learn about AI?
“In recent years, advanced self-learning algorithms have raised a number of concerns. In order to achieve the potential benefits of new technologies such as AI, it is crucial to confront the critical challenges they raise with regard to transparency, privacy, equality, and accountability. Safeguarding fundamental values and ensuring respect for human rights is one of the most important issues surrounding current AI. Because of their complex inner workings and autonomous capabilities, machine-learning algorithms can reach results that humans are not able to explain. Experts agree that it is essential to confront this issue of transparency of the reasoning process of algorithms, and to provide for a ‘right to explanation’.

It is also very important to tackle the issue of bias in algorithmic decision-making (which stems from bias in humans and in data sets) in order to develop globally beneficial AI. In recent years, many algorithms used in public services, such as health, welfare, or policing, have been found to lead to biased and discriminatory decisions. For instance, recently there has been a lot of backlash against the increasing use of facial recognition technologies in the context of law enforcement. A recent UK report found that ‘Police officers themselves are concerned about the lack of safeguards and oversight regarding the use of algorithms in fighting crime’. These types of dilemmas raised by AI, and possible solutions grounded in the rule of law, will be discussed at the winter academy.”

You have recently won a research grant to look into the ethical and lawful uses of military AI. What will you be researching?
“Another central issue of new technologies like AI is accountability. The increasing use of autonomous technologies and adaptive systems in the military context poses profound ethical, legal, and policy challenges. AI technologies have the potential to greatly improve military capabilities and offer significant strategic and tactical advantages. In order to leverage the potential benefits of AI technologies and human-machine partnerships in the military while abiding by the rule of law and ethical values, it is essential that technologies developed to assist in decision-making do not in reality substitute for human decisions and actions.

This new research project sets out to ensure that military AI technologies remain in line with international norms and ethical values. The research team will analyse why human control over military technologies must be guaranteed, where it is most critical to maintain the role of human agents (in particular to ensure legal compliance and accountability), and how to technically ensure that military AI-based technologies are designed and deployed in line with public values and the rule of law. On this basis, we will seek to operationalise public values into policy and technical solutions.”

What would you advise policymakers, computer scientists and academics working on AI today?
“Today is the moment to act in terms of governance and regulation. Technology is not created in a vacuum; it is a choice to decide what technology we develop and how we develop it. Artificial Intelligence is advancing fast, and policy makers who are concerned with responsible innovation need to catch up and take ownership of the topic, to make sure that technologies are developed in accordance with the rule of law and human rights. Our winter academy on AI and international law was specifically designed for anyone interested in the international governance of technology. Some of our participants have a background in computer science, while others have a legal or a policy background. The aim of our winter academy is to benefit people from both backgrounds. That is why we have designed the programme in such a way that the first sessions include lectures that will help people who do not have much technical background to understand what AI is, how it is used, and what the challenges are from a technological perspective. Other lectures will give a good legal background to participants who have a technical background but are less familiar with the legal and ethical aspects of AI.”

More info about the winter academy on Artificial Intelligence and International Law
2020 will be a critical year to set the tone for the next decade of innovations in Artificial Intelligence (AI), one of the most complex technologies to monitor or regulate. Stay ahead of the curve by signing up for our winter academy on Artificial Intelligence and International Law (20-24 January 2020). Top speakers will give you the latest insights into the current and future issues raised by AI from the perspective of international law. Our winter academy offers you foundational knowledge on key issues at the interface of international law and AI, and provides a platform for critical debate and engagement on emerging questions. The programme is structured along five themes: Understanding AI, AI for good, AI and armed conflict, AI and responsibility, and AI governance. Are you a policymaker, industry professional, or academic researcher working on issues related to AI and international law? Have a look at the programme here. For more information or to register for the training programme click here.