[Public consultation] Principles and values for military artificial intelligence

Published 17 April 2023


The DILEMA project, run by Asser senior researcher Dr Berenice Boutin, is conducting a public consultation on principles and values for military artificial intelligence (AI). The aim is to gain insights on public perception of military AI, and the principles and values that should guide the development and use of military AI. Fill in the survey here.

One of the persistent hurdles facing current research on responsible (military) AI is the lack of consensus and shared understanding on which public values ought to be safeguarded, and on how global AI governance and value alignment can best be achieved.

Bridging the gap

As part of the research carried out in the DILEMA project, we aim to bridge this gap, in particular by setting up a platform for building an interdisciplinary consensus on which public values should guide the design and use of military AI. Through this public and open consultation on principles and values for military AI, we invite input from subject experts as well as from civil society and the general public.

Please share your opinion in the short and anonymous survey.

Workshop: Critical reflections on AI ethics principles and societal inputs for the governance of military AI

On Wednesday 26 April 2023, the DILEMA project (Asser Institute), the University of Amsterdam, and the Battlefield AI Project of the University of New South Wales, Canberra (Australia) will co-organise the hybrid workshop ‘Critical reflections on AI ethics principles and societal inputs for the governance of military AI’. The workshop will address practical, legal, and ethical aspects of developing and applying societally informed ethics principles for the governance of military AI. Read more.

When: 10:00 – 12:30 (CET), 18:00 – 20:30 (AEST)

Read more & registrations

About Designing International Law and Ethics into Military Artificial Intelligence (DILEMA)

The DILEMA project explores interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. In particular, it analyses the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance with international law and accountability by design.