DILEMA Statement on the Global Governance of Artificial Intelligence in the Military

This statement provides recommendations to policy-makers at the international level on how to approach and address the complex challenges raised by the design, development, and use of artificial intelligence (AI) in the military domain. It builds upon extensive interdisciplinary research conducted by the DILEMA project on Designing International Law and Ethics into Military Artificial Intelligence.


1. The impact of AI in the military goes far beyond autonomous weapons systems, and a wide range of other military applications of AI must be included in international regulatory debates. In particular, the use of AI-driven decision support systems for intelligence, surveillance, and target acquisition raises important legal and ethical challenges. These challenges need to be proactively addressed prior to the adoption and deployment of new technologies of warfare. AI governance should also account for the compounded impacts of multiple integrated and interacting AI applications across military systems, as well as attend to the broader implications of AI for military strategies and practices.

2. The notion of human agency is a nuanced and useful conceptual tool to capture the various ways in which humans and AI systems influence each other. While meaningful human control (MHC) has been a helpful concept to frame initial debates on autonomous weapons systems, it fails to address the complexities of dynamic relationships and interdependent interactions between humans and AI systems. These complexities can be better analysed through the notion of human agency, which relates to deliberative reasoning processes, perceptions, goal setting, and decision making.

3. AI systems should be carefully and responsibly designed in a way that promotes the exercise of human agency, remains within moral boundaries, and ensures compliance with international law. International law constitutes a well-established and internationally agreed set of norms and principles that must guide and limit the design and development, as well as deployment, of military AI systems. Moreover, design and regulation are important ways in which humans exercise agency at the pre-deployment phase and impact the operation of AI systems in the field.

4. When developing military AI technologies, system goals, parameters, training datasets, and other design choices should be determined in accordance with what is legally and morally permissible. Industry should actively involve end-users, legal advisors, ethicists, and other relevant experts in the design and development of AI-driven technologies for the military.

5. International law frameworks of State and individual responsibility are powerful tools for AI governance prior to deployment and for deterring harmful conduct. They are not only necessary to allocate responsibility amongst actors if and when harm occurs, but also play a role in promoting compliance. In particular, States have the responsibility to ensure that they, as well as private actors under their jurisdiction, design and develop AI systems in line with relevant norms of international humanitarian and human rights law.

6. It is important to acknowledge the limitations of humans as well as machines. When responsibly integrated into military practice, AI can supplement human cognitive and analytical abilities and help to improve lawful and ethical military decision making. Conversely, activities and military decisions that, for legal or moral reasons, necessarily require human engagement and judgement should not be delegated to machines.

7. The mutual impact of humans and technologies should be recognised at all stages of the decision making cycle. Design and regulation should focus on the dynamics of human-machine relationships rather than on the isolated abilities of humans or capabilities of technical systems. When considering the implications of new technologies of warfare, the performance of humans and machines should not be evaluated in comparative terms. Instead, human and machine interactions should be evaluated in combination, on the basis of their impact on military decisions and conduct, as well as on compliance with international law.

8. The use of data-driven algorithms for the identification of human targets poses serious risks of violating the targeting rules and standards of international humanitarian law, including the principles of distinction, proportionality, and precautions. The identification of human targets carried out with the use or support of AI systems can exacerbate risks of unlawful engagements. Human targets in warfare should be identified in relation to precise criteria defined by humans and aligned with international law.

9. The process of testing AI-driven technologies is crucial to minimise technical malfunctions and human errors, observe the consequences of human-AI interactions, prevent unlawful use in the field, and build trust. AI testing can include experimentation, simulated exercises, and training with military practitioners before such systems are deployed and used during active hostilities. Furthermore, AI applications used as part of the targeting process should be subject to legal review and certification under Article 36 of the First Additional Protocol to the Geneva Conventions, as well as to iterative monitoring and validation during deployment.

10. States should implement educational and training programs about AI for military forces, policy makers, developers, and other relevant stakeholders. These programs should raise awareness and understanding of risks such as automation bias among system operators, of the capabilities and limitations of AI-driven systems, and of the application of international law and ethical standards in the context of AI use in the military domain.

January 2023

Authors

B. Boutin, K. Klonowska, M. Pacholska, S. Soltanzadeh, T. Woodcock, T. Zurek

Citation: B. Boutin, K. Klonowska, M. Pacholska, S. Soltanzadeh, T. Woodcock, T. Zurek, ‘DILEMA Statement on the Global Governance of Artificial Intelligence in the Military’ (January 2023), available at: www.asser.nl/dilema/research/dilema-statement
