Berenice Boutin publishes in I4ADA Report

Published 15 October 2020

Project leader Dr Berenice Boutin published a short piece entitled ‘Beyond AI Ethics: International Law and Human Rights for AI Accountability’, as part of the I4ADA Accountability Paper Vision 2020: Taking Stock & Looking Forward.

The report builds on the I4ADA Hague Summit for Accountability in the Digital Age, held on 6–7 November 2019 at the Peace Palace. During the Summit, Dr Boutin moderated a Plenary Panel on Accountability and Artificial Intelligence, and a Roundtable on International Law for AI Accountability.

The text of Dr Boutin’s contribution to the report is reproduced below.

* * *

Beyond AI Ethics: International Law and Human Rights for AI Accountability

As AI is progressively deployed in public domains such as healthcare, energy, welfare, border security, criminal justice, law enforcement, and defence, we must ensure that the development and use of AI technologies are guided by core democratic values and subject to legal mechanisms of accountability. To this end, established norms and processes of international law, in particular international human rights law, have an important role to play.

In recent years, sharp advances in AI capabilities have been accompanied by a growing recognition of the need to reflect proactively on their societal implications, so as to shape the development and applications of technology in line with ethical values. Public and private institutions alike have called for fundamental questioning of the potential impacts of AI, in order to steer AI research and policy towards beneficial outcomes, and ultimately to maintain agency over the technologies we decide to adopt.

The unfettered deployment of data-driven policy-making and algorithmic decision-making in the public sector can indeed carry serious negative consequences for non-discrimination, privacy, due process, transparency, and accountability. For instance, the use of risk-assessment algorithms in the judicial system has led to blatant discrimination in the United States, and the automated detection of welfare fraud is being litigated in the Netherlands in the SyRI case. Potentially promising and seemingly less controversial applications of AI, for example to improve healthcare or energy management, should likewise be subject to close reflection and scrutiny, as they are not exempt from risks and concerns.

In this context, sets of guiding principles for ethical AI and informal codes of conduct for self-regulation have proliferated. While the global efforts to reflect on AI ethics are laudable and necessary, it is time to move beyond AI ethics towards binding legal frameworks and enforceable regulation of AI. This is not to say that new laws are needed: on the contrary, policy and regulatory efforts should primarily seek to interpret and implement existing legal frameworks.

In order to advance AI accountability, international law has a two-fold role to play. First, it provides established, globally agreed, actionable, and enforceable standards, in particular within the human rights framework, which embodies values such as fairness, equality, dignity, and individual autonomy. Second, international institutions and processes offer an ideal forum to debate and engage with grey areas and unsettled questions. The international legal dimension does not supplant ethical and technical approaches to AI accountability but complements them. The ethical, legal, technical, and policy aspects must be addressed together in order to achieve accountability in relation to AI.