Research Project

The deployment of artificial intelligence (AI) technologies in the military context has the potential to greatly improve military capabilities and to offer significant strategic and tactical advantages. At the same time, the use of increasingly autonomous technologies and adaptive systems in the military context poses profound ethical, legal, and policy challenges. This project explores the conditions and modalities that would make it possible to leverage the potential benefits of AI technologies in the military while abiding by the rule of law and ethical values.

The ethical and legal implications of the potential use of AI technologies in the military have been on the agenda of the United Nations, governments, and non-governmental organisations for several years. In critical warfare functions, reliance on autonomous intelligent systems is highly controversial and should be carefully assessed against ethical values and legal norms. Fully autonomous weapon systems constitute a hard red line, considered by many to be irreconcilable with international humanitarian law and with the public values of human dignity and accountability. Yet the question of where and how to draw this red line is not settled. Moreover, the potential applications of AI in the military context are considerably broader than the issue of autonomous weapons alone. The capacity of AI technologies to collect and process vast amounts of information at a scale and speed beyond human cognitive abilities will likely affect many aspects of decision-making across the broad spectrum of activities involved in the planning and conduct of military operations, from reconnaissance and intelligence gathering to the prioritisation and identification of potential targets in an operational setting.

AI technologies could be usefully and lawfully deployed to support and improve military action and to assist human decision-making, but clear lines must delimit in which situations, and to what extent, military decisions at the strategic, operational, and tactical levels should be supported by AI systems. Technologies intended merely to assist decision-making formally remain under direct human supervision; yet it is questionable whether some of them do not, in practice, substitute for human decisions. It is therefore essential to address critically the questions of where and how the role of human agents should be maintained throughout the development and deployment of military AI technologies.

The project will investigate how to ensure that military AI technologies support, but never replace, critical judgement and decision-making by human soldiers. To answer this main question, three sub-questions will be addressed: (1) why it is essential to guarantee human involvement in certain functions and activities, (2) where it is most critical to maintain the role of human agents, in particular to ensure legal compliance and accountability, and (3) how to ensure, technically, that military technologies are designed and deployed in line with ethical and legal frameworks.

The project will be carried out by an interdisciplinary team working in dialogue with partners to address the ethical, legal, and technical dimensions of the research question. Three researchers will be recruited to focus, respectively, on why, where, and how to maintain human control over military AI technologies. In parallel, two transversal tracks will strengthen interdisciplinary dialogue and build a consensus on public values for military AI.

  • The Foundational Nature of Human Agency

A post-doctoral researcher with a background in philosophy and ethics will conduct research aimed at pinpointing the fundamental rationales for safeguarding human agency when developing or deploying highly automated military technologies. The post-doctoral researcher will engage in fundamental research on the functions of human agency and analyse how current or future AI systems could affect these functions. He or she will test the assumption underlying this proposal: that human agency enables the realisation of public values and is a precondition for compliance with international norms and for accountability. This research on human agency is expected to reveal the intrinsically interdisciplinary dimension of the question of legal compliance and accountability in relation to military AI.

  • Compliance and Accountability in the Acquisition and Deployment of Military AI

A PhD researcher will investigate the international legal framework applicable to military AI technologies and deliver an in-depth study of compliance with, and accountability for violations of, international law in the acquisition and deployment of military AI technologies. Established legal standards of international humanitarian law and the law of armed conflict apply to any military technology whose development or deployment is envisaged, and a state's failure to comply with its international obligations engages its responsibility. However, the question of how precisely compliance with international obligations and humanitarian principles can be achieved when deploying military AI needs further exploration. The PhD researcher will adopt a granular, context-specific approach that distinguishes different categories of military activities and types of AI technologies. Furthermore, the PhD researcher will develop frameworks for compliance specifically at the acquisition stage, since international norms must also be fully complied with when acquiring new military equipment and technologies. This relatively unexplored question of legal compliance at the acquisition stage will be addressed in collaboration with other members of the research team, with the goal of developing policy and technical guidance to verify compliance.

  • Integrating Legal Standards in the Design of Military AI Technologies

Building on research on the ethical and legal boundaries that should guide and limit the development and deployment of military technologies, a post-doctoral researcher with a background in computer science and systems engineering will work on how to integrate these norms and values into the design of autonomous technologies. In particular, the post-doctoral researcher will develop methods and protocols for integrating international legal norms into military technologies, as well as standards and processes to verify and certify the compliance of military AI systems with international law. The project will not aim at writing system specifications, but rather at highlighting which elements should be examined in order to test and certify that a given AI component or system is designed, and can be deployed, in compliance with international law. Furthermore, the post-doctoral researcher will test to what extent optimising systems in line with international norms can converge with the goal of safe and effective AI technologies.
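
To give a flavour of what "integrating legal norms into design" could mean in practice, the following is a minimal, hypothetical sketch in Python of one possible approach: legal requirements are encoded as hard constraints that gate an optimisation over candidate courses of action, so that no non-compliant option can ever be recommended. All names, thresholds, and the scoring logic below are illustrative assumptions introduced for this sketch, not validated legal standards or an implementation the project commits to.

```python
# Hypothetical sketch: legal norms as hard constraints gating a
# decision-support ranking. Thresholds and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    military_advantage: float      # anticipated advantage, normalised to [0, 1]
    target_id_confidence: float    # confidence the target is a military objective
    expected_civilian_harm: float  # expected incidental harm, normalised to [0, 1]

def satisfies_distinction(coa: CourseOfAction, min_confidence: float = 0.95) -> bool:
    """Distinction as a hard constraint: only options whose target
    identification exceeds an assumed threshold may be recommended."""
    return coa.target_id_confidence >= min_confidence

def satisfies_proportionality(coa: CourseOfAction, max_ratio: float = 0.2) -> bool:
    """Proportionality encoded (crudely) as a bound on the ratio of expected
    incidental harm to anticipated military advantage."""
    if coa.military_advantage <= 0:
        return coa.expected_civilian_harm == 0
    return coa.expected_civilian_harm / coa.military_advantage <= max_ratio

def recommend(options: list[CourseOfAction]) -> list[CourseOfAction]:
    """Filter out non-compliant options first, then rank the remainder.
    The final choice is left to a human decision-maker."""
    lawful = [c for c in options
              if satisfies_distinction(c) and satisfies_proportionality(c)]
    return sorted(lawful, key=lambda c: c.military_advantage, reverse=True)

if __name__ == "__main__":
    candidates = [
        CourseOfAction("A", 0.90, target_id_confidence=0.99, expected_civilian_harm=0.05),
        CourseOfAction("B", 0.95, target_id_confidence=0.80, expected_civilian_harm=0.01),
    ]
    for coa in recommend(candidates):
        print(f"{coa.name}: advantage={coa.military_advantage:.2f}")
    # Option B is excluded despite its higher advantage: its identification
    # confidence fails the distinction constraint.
```

The design choice illustrated here, treating norms as constraints rather than as weighted terms in a single objective, is one way of ensuring that compliance can never be traded off against effectiveness; whether such an encoding is legally and ethically adequate is precisely the kind of question the project will examine.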

  • The Hague Consensus on Public Values for Military AI

One of the persistent hurdles faced by current research on responsible (military) AI is the lack of consensus and shared understanding on which public values ought to be safeguarded and on how value alignment can best be achieved when developing AI. This project will aim to bridge this gap and bring new ideas and solutions by: (1) setting up a platform aimed at reaching an interdisciplinary consensus on which public values should guide the design and use of military AI, (2) conceptualising international legal norms applicable to military operations (in particular customary international humanitarian law) as actionable expressions of broader public values, and (3) developing engineering methods to assign, optimise, and quantify public values and international standards as goal functions. To build this consensus, biannual meetings will be held in The Hague with experts in ethics, law, the military, computer science, systems engineering, psychology, and other relevant disciplines. It is expected that genuinely interdisciplinary dialogue and mutual engagement will contribute to a shared understanding of the identification, interpretation, hierarchisation, and quantification of public values for military AI. The project also envisages involving the general public in building the consensus, notably through an online survey on values for military AI.
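
As an illustration of points (3) and of "hierarchisation", the following minimal Python sketch shows one hypothetical way of ordering public values: design alternatives are compared lexicographically, so that a higher-ranked value can never be traded away for gains on a lower-ranked one. The value names, their ordering, and the scores are illustrative assumptions only; establishing an actual hierarchy of values is one of the goals of the consensus-building track.

```python
# Hypothetical sketch: hierarchisation of public values via lexicographic
# comparison. Names, ordering, and scores are illustrative assumptions.
VALUE_HIERARCHY = ["human_dignity", "accountability", "effectiveness"]

def lexicographic_key(scores: dict[str, float]) -> tuple[float, ...]:
    """Build a sort key that respects the assumed value hierarchy:
    values earlier in the hierarchy dominate all later ones."""
    return tuple(scores[v] for v in VALUE_HIERARCHY)

designs = {
    "design_1": {"human_dignity": 0.9, "accountability": 0.7, "effectiveness": 0.6},
    "design_2": {"human_dignity": 0.9, "accountability": 0.8, "effectiveness": 0.4},
    "design_3": {"human_dignity": 0.7, "accountability": 0.9, "effectiveness": 0.9},
}

# design_2 wins: it ties design_1 on dignity and beats it on accountability;
# design_3's stronger effectiveness cannot compensate for lower dignity.
best = max(designs, key=lambda d: lexicographic_key(designs[d]))
print(best)  # -> design_2
```

A lexicographic ordering is only one candidate among others (weighted aggregation, constraint satisfaction, multi-objective Pareto analysis); which quantification method best reflects the agreed hierarchy of values is itself a question for the interdisciplinary platform.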

  • Interdisciplinary Boundaries Research Paper Series

Throughout the project, the research team and partners will produce interdisciplinary open-access papers exploring specific topical questions at the boundaries of research fields. About two papers per year will be co-authored by at least two members of the research team or consortium with distinct disciplinary backgrounds. For instance, a number of humanitarian principles and values relevant to the conduct of armed conflict sit at the interface of law and ethics, while the topic of certification standards for lawful military AI lies at the intersection of policy, law, and computer science.