Designing International Law and Ethics into Military AI (DILEMA)

Sep 1, 2020 - Dec 31, 2024

Project description

This project explores the conditions and modalities under which the potential benefits of AI technologies in the military can be leveraged while abiding by the rule of law and ethical values. It seeks to ensure that technologies developed to assist decision-making do not in practice substitute for critical human judgement, and that military technologies thereby remain under human control.

An interdisciplinary research team will work in dialogue with partners to address the ethical, legal, and technical dimensions of the project. First, research will be conducted on the foundational nature of the pivotal notion of human agency, so as to unpack the fundamental reasons why human control over military technologies must be guaranteed. Second, the project will identify where the role of human agents must be maintained, in particular to ensure legal compliance and accountability. It will map out which forms and degrees of human control and supervision should be exercised at which stages, and over which categories of military functions and activities. Third, the project will analyse how to ensure, at the technical level, that military technologies are designed and deployed within the ethical and legal boundaries identified.

Throughout the project, research findings will provide solid input for the policy and regulation of military technologies involving AI. In particular, the research team will translate results into policy recommendations for national and international institutions, as well as into technical standards and testing protocols for compliance and regulation.

The project leader is Berenice Boutin. The project is funded by the Dutch Research Council (NWO) Platform for Responsible Innovation (NWO-MVI).

Project website:

Twitter: @DILEMA_project