Designing International Law and Ethics into Military AI (DILEMA)

Research project (2020-2024) 

Project leader: Dr Berenice Boutin

SUMMARY

This project explores the conditions and modalities under which the potential benefits of AI technologies in the military can be leveraged while abiding by the rule of law and ethical values. It seeks to ensure that technologies developed to assist in decision-making do not in reality substitute for critical judgement by human agents, and thereby remain under human control.

An interdisciplinary research team will work in dialogue, and together with partners, to address the ethical, legal, and technical dimensions of the project. First, research will be conducted on the foundational nature of the pivotal notion of human agency, so as to unpack the fundamental reasons why human control over military technologies must be guaranteed. Second, the project will identify where the role of human agents must be maintained, in particular to ensure legal compliance and accountability. It will map out which forms and degrees of human control and supervision should be exercised, at which stages, and over which categories of military functions and activities. Third, the project will analyse how to technically ensure that military technologies are designed and deployed within the ethical and legal boundaries identified.

Throughout the project, research findings will provide solid input for policy and regulation of military technologies involving AI. In particular, the research team will translate results into policy recommendations for national and international institutions, as well as technical standards and testing protocols for compliance and regulation.

The Project Team includes project leader Dr Berenice Boutin (Asser Institute, University of Amsterdam), Prof. Dr Terry Gill (University of Amsterdam), and Prof. Dr Tom van Engers (University of Amsterdam; TNO).

Project Partners include TNO, Thales Nederland, The Hague Centre for Strategic Studies (HCSS), PAX, the Graduate Institute Geneva (IHEID), the International Society for Military Law and Law of War (ISMLLW), the Ministry of Defence, the Ministry of Foreign Affairs, and the Municipality of The Hague.

The project is funded by the Dutch Research Council (NWO) under the research programme ‘Responsible Innovation. Designing for public values in a digital world’. 

NEWS & UPDATES 

  • Organisation of an Expert Roundtable on ‘Trading Emerging Technologies: Security and Human Rights Perspectives’, together with Utrecht University’s project ‘Disrupting Technological Innovation? Towards an Ethical and Legal Framework’ (15 September 2020, Asser Institute)
  • Project leader Dr Berenice Boutin speaks at a Roundtable on Military Use of AI at the Data Science & Law Forum: Tools & Rules for Responsible AI organised by Microsoft (3 March 2020, Brussels)
  • Vacancy published for a Post-Doctoral Researcher with a focus on philosophy and ethics of technologies in the military context (26 February 2020)
  • Vacancy published for a PhD Candidate in Public International Law with a focus on law and technologies in the military context (26 February 2020)

RESEARCH QUESTIONS AND METHODOLOGY 

The deployment of artificial intelligence (AI) technologies in the military context has the potential to greatly improve military capabilities and to offer significant strategic and tactical advantages. At the same time, the use of increasingly autonomous technologies and adaptive systems in the military context poses profound ethical, legal, and policy challenges. This project explores the conditions and modalities under which the potential benefits of AI technologies in the military can be leveraged while abiding by the rule of law and ethical values.

The ethical and legal implications of the potential use of AI technologies in the military have been on the agenda of the United Nations, governments, and non-governmental organisations for several years. In critical warfare functions, reliance on autonomous intelligent systems is indeed highly controversial and should be carefully assessed against ethical values and legal norms. Fully autonomous weapon systems constitute a hard red line, considered by many to be irreconcilable with international humanitarian law and with the public values of human dignity and accountability. Yet the question of where and how to draw this red line is not settled. Moreover, the potential applications of AI in the military context are considerably broader than the issue of autonomous weapons alone. The capacity of AI technologies to collect and process vast amounts of information at a scale and speed beyond human cognitive abilities will likely affect many aspects of decision-making across a broad spectrum of activities in the planning and conduct of military operations, ranging from reconnaissance and intelligence gathering to the prioritisation and identification of potential targets in an operational setting.

AI technologies could be usefully and lawfully deployed to support and improve military action and to assist human decision-making, but clear lines must delimit in which situations, and to what extent, military decisions at the strategic, operational, and tactical levels should be supported by AI systems. AI technologies that are supposed to merely assist in decision-making formally remain under direct human supervision; yet questions arise as to whether certain technologies deployed to assist in decision-making do not in reality substitute for human decisions. It is therefore essential to critically address the questions of where and how the role of human agents should be maintained throughout the development and deployment of military AI technologies.

The project will investigate the question of how to ensure that military AI technologies support, but never replace, critical judgement and decision-making by human soldiers. In order to answer this main question, three sub-questions will be addressed: (1) why it is essential to guarantee human involvement in certain functions and activities; (2) where it is most critical to maintain the role of human agents, in particular to ensure legal compliance and accountability; and (3) how to technically ensure that military technologies are designed and deployed in line with ethical and legal frameworks.

The project will be carried out by an interdisciplinary team working in dialogue, and together with partners, to address the ethical, legal, and technical dimensions of the research question. Three researchers will be recruited to focus respectively on why, where, and how to maintain human control over military AI technologies. In parallel, two transversal tracks will seek to strengthen interdisciplinary dialogue and to build a consensus on public values for military AI.

  • The Foundational Nature of Human Agency

A post-doctoral researcher with a background in philosophy and ethics will conduct research aimed at pinpointing the fundamental rationales for safeguarding human agency when developing or deploying highly automated military technologies. The post-doctoral researcher will engage in fundamental research on the functions of human agency, and analyse how current or future AI systems could affect these functions. She/he will test the assumption on which this proposal builds: that human agency enables the realisation of public values and is a precondition for compliance with international norms and for accountability. It is expected that research on human agency will reveal the intrinsically interdisciplinary dimension of the question of legal compliance and accountability in relation to military AI.

  • Compliance and Accountability in the Acquisition and Deployment of Military AI

A PhD researcher will investigate the international legal framework applicable to military AI technologies and deliver an in-depth study on compliance with, and accountability for violations of, international law in the acquisition and deployment of military AI technologies. Established legal standards of international humanitarian law and the law of armed conflict apply to any military technology whose development or deployment is envisaged, and the failure of a state to comply with its international obligations engages its responsibility. However, the question of how precisely compliance with international obligations and humanitarian principles can be achieved when deploying military AI needs to be further explored. The PhD researcher will adopt a granular and context-specific approach, distinguishing different categories of military activities and types of AI technologies. Furthermore, the PhD researcher will develop frameworks for compliance specifically at the stage of acquisition of military technologies. Indeed, international norms must also be fully complied with when seeking to acquire new military equipment and technologies. The relatively unexplored question of legal compliance at the stage of acquisition will further be addressed in collaboration with other members of the research team, with the goal of developing policy and technical guidance to verify compliance.

  • Integrating Legal Standards in the Design of Military AI Technologies

Building on research into the ethical and legal boundaries which should guide and limit the development and deployment of military technologies, a post-doctoral researcher with a background in computer science and systems engineering will work on how to integrate these norms and values into the design of autonomous technologies. In particular, the post-doctoral researcher will develop methods and protocols for integrating international legal norms into military technologies, as well as standards and processes to verify and certify the compliance of military AI systems with international law. The project will not aim at writing systems specifications, but rather at highlighting which elements should be examined in order to test and certify that a given AI component or system is designed, and can be deployed, in compliance with international law. Furthermore, the post-doctoral researcher will test to what extent systems optimisation in line with international norms can converge with the goal of safe and effective AI technologies.
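For concreteness only, the sketch below illustrates one way a requirement of human control could be embedded in a system's design. Every class and function name is invented for this illustration and is not a project deliverable: the type representing an AI recommendation is deliberately not actionable, so the only path to an authorised decision runs through an explicit, logged act of a human operator.

```python
# Illustrative sketch only: a hypothetical "human control by design" gate.
# All names are invented for this example; the project's actual methods
# and protocols remain to be developed.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass(frozen=True)
class Recommendation:
    """An AI-generated suggestion; deliberately not executable on its own."""
    target_id: str
    rationale: str
    confidence: float  # model confidence in [0, 1]


@dataclass(frozen=True)
class AuthorisedDecision:
    """A decision that exists only after explicit human confirmation."""
    recommendation: Recommendation
    operator_id: str
    confirmed_at: datetime


def confirm(rec: Recommendation, operator_id: str, assent: bool) -> Optional[AuthorisedDecision]:
    """The only path from recommendation to decision: a logged human act.

    Without human assent no decision object exists, so the system cannot
    act on a bare Recommendation; human judgement is structurally required.
    """
    if not assent:
        return None
    return AuthorisedDecision(
        recommendation=rec,
        operator_id=operator_id,
        confirmed_at=datetime.now(timezone.utc),
    )
```

The design choice illustrated here is that human control is enforced by the type structure itself rather than by a procedural guideline, which is one way in which legal requirements could be made testable at the design stage.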

  • The Hague Consensus on Public Values for Military AI

One of the persistent hurdles faced by current research on responsible (military) AI is the lack of consensus and shared understanding on which public values ought to be safeguarded, and on how value-alignment can best be achieved when developing AI. This project will aim to bridge this gap and to bring new ideas and solutions by: (1) setting up a platform aimed at reaching an interdisciplinary consensus on which public values should guide the design and use of military AI; (2) conceptualising international legal norms applicable to military operations (in particular customary international humanitarian law) as actionable expressions of broader public values; and (3) developing engineering methods to assign, optimise, and quantify public values and international standards as goal-functions, as sketched below. In order to build this consensus, biannual meetings will be held in The Hague with experts from ethics, law, the military, computer science, systems engineering, psychology, and other relevant disciplines. It is expected that truly interdisciplinary dialogue and mutual engagement will contribute to reaching a shared understanding on the identification, interpretation, hierarchisation, and quantification of public values for military AI. The project also envisages the involvement of the general public in building the consensus, notably through an online survey on identifying values for military AI.
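As a minimal sketch of what expressing public values as goal-functions could mean in practice, the example below aggregates per-value scores with weights. The value names, scores, and weights are all hypothetical placeholders: eliciting and hierarchising them is precisely the consensus-building task described above.

```python
# Illustrative sketch: public values expressed as a weighted goal-function.
# The values, weights, and scores are placeholders; agreeing on them is the
# point of the consensus-building exercise, not a given.

# Hypothetical per-value scores for a candidate system behaviour, each in [0, 1].
value_scores = {
    "distinction": 0.92,       # e.g. estimated combatant/civilian discrimination
    "proportionality": 0.85,
    "human_dignity": 0.78,
    "accountability": 0.95,    # e.g. completeness of the audit trail
}

# Hypothetical weights reflecting a (to-be-negotiated) hierarchy of values.
weights = {
    "distinction": 0.35,
    "proportionality": 0.25,
    "human_dignity": 0.20,
    "accountability": 0.20,
}


def goal_function(scores: dict, weights: dict) -> float:
    """Aggregate value scores into a single scalar objective in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[v] * scores[v] for v in weights)


print(f"aggregate value-alignment score: {goal_function(value_scores, weights):.3f}")
```

A simple weighted sum is shown only for concreteness; a negotiated hierarchy of values may instead call for lexicographic orderings, or for hard constraints that no weight can trade away.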

  • Interdisciplinary Boundaries Research Paper Series

Throughout the project, the research team and partners will produce interdisciplinary open-access papers exploring specific topical questions at the boundaries of research fields. About two papers each year will be co-authored by at least two members of the research team or consortium with distinct disciplinary backgrounds. For instance, a number of humanitarian principles and values relevant to the conduct of armed conflict are situated at the interface of law and ethics. The topic of certification standards for lawful military AI is also at the intersection of policy, law, and computer science.  

SOCIETAL RELEVANCE AND IMPACT

Research produced as part of this project will contribute to shaping public debates and policy developments on the critical societal issues raised by the use of AI technologies in the military. The question of how to concretely ensure that human soldiers remain in control of military technologies is of great importance and concern for the armed forces, national and international governing institutions, and the general public. This project will provide a platform for discussion and exchange amongst stakeholders and result in solid input for policy and regulation.

The valorisation and dissemination activities and products in this project include:

  • Policy Guidelines on Maintaining Human Control over AI Technologies in the Military

The research team will produce background papers and reports highlighting the main policy implications of the research results, and on this basis will formulate detailed recommendations for the responsible use of AI in the military. The project will specifically develop detailed policy guidelines on where and how to maintain human control in order to ensure compliance with, and accountability for violations of, international law in the design, acquisition, and deployment of military technologies involving AI. The policy guidelines will be developed with the involvement of relevant stakeholders from the military and industry, and in dialogue with policy makers at the national, European, and international levels, as well as non-governmental organisations.

  • Technical Standards and Protocols for Testing and Certification of Compliance

Policy guidance will further be operationalised through the development of technical tools to test and certify whether AI systems whose development, acquisition, or deployment is envisaged technically meet ethical and legal thresholds. Together with industry partners and policy stakeholders, the research team will develop standards and protocols aimed at safeguarding human agency and ensuring international legal compliance by design, along the lines sketched below. The technical guidance will be of direct use to both industry and governmental partners.
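Purely as an illustration of what a machine-checkable certification protocol might look like, the sketch below compares measured metrics against minimum thresholds and reports every failure. None of the metric names or figures are proposed standards; they are invented for this example.

```python
# Illustrative sketch of a certification check: metric names and thresholds
# are invented placeholders, not proposed standards.

# Hypothetical minimum thresholds a candidate system must meet.
CERTIFICATION_THRESHOLDS = {
    "distinction_accuracy": 0.99,      # discrimination between lawful and unlawful targets
    "human_override_latency_s": 2.0,   # max seconds to honour a human abort command
    "decision_log_coverage": 1.0,      # share of actions with a complete audit trail
}


def certify(measured: dict) -> tuple:
    """Compare measured metrics against thresholds; report every failure."""
    failures = []
    for metric, threshold in CERTIFICATION_THRESHOLDS.items():
        value = measured.get(metric)
        if value is None:
            failures.append(f"{metric}: not measured")
        elif metric.endswith("_latency_s"):
            if value > threshold:      # latencies must stay below the threshold
                failures.append(f"{metric}: {value} > {threshold}")
        elif value < threshold:        # all other metrics must meet the minimum
            failures.append(f"{metric}: {value} < {threshold}")
    return (not failures, failures)


ok, failures = certify({
    "distinction_accuracy": 0.995,
    "human_override_latency_s": 1.2,
    "decision_log_coverage": 1.0,
})
print("certified" if ok else f"not certified: {failures}")
```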

  • The Hague Testing Lab on AI Compliance with International Law

Policy and technical guidelines will further be concretised and applied by setting up a testing lab in The Hague on AI compliance with international law. Initially, the lab will focus on military applications of AI, and their compliance with the law of armed conflict and military operations. Later, the lab could extend to other applications of AI, and offer features to test compliance with international law more generally. This unique initiative would have the potential to become an internationally leading testing lab for AI and international law, in the ideal location of The Hague.

  • Rules of Engagement for Deploying Military AI in Compliance with International Law

In collaboration with stakeholders such as the Ministry of Defence and NATO, the research team will seek to assess, adjust and supplement existing Rules of Engagement, and to draft model Rules of Engagement which incorporate and implement thresholds and modalities of human control over military AI.

  • Professional Trainings

Tailored trainings will be offered to members of the armed forces and policy makers. Together with partners such as the Netherlands Defence Academy, the project team will develop a flexible curriculum with modules on ethical, legal, technical, and policy aspects. The trainings will involve an active coaching approach, as well as serious gaming, simulations and interactive exercises on concrete scenarios of human-machine partnerships.

  • Innovative Outreach Activities

Public outreach activities aimed at a broad dissemination of, and engagement with, research findings will be rolled out throughout the project. They will include events and lectures in The Hague, online non-scientific publications, multimedia products such as podcasts and videos, and innovative activities such as knowledge cafés, and a survey and hackathon on public values for military AI.