Kiobel in The Hague – Holding Shell Accountable in Dutch Courts - Event Report - By Mercedes Hering

Editor's note: Mercedes is a recent graduate of the LL.B. dual-degree programme in English and German Law, which is taught jointly by University College London (UCL) and the University of Cologne. She will sit the German state exam in early 2022. Alongside her studies, she works as a student research assistant at the Institute for International and Foreign Private Law in Cologne. In September 2020, she joined the Asser Institute as a research intern for the Doing Business Right project.


On 25 September 2020, the final hearings in the Kiobel case took place before the Dutch District Court in The Hague. The case dates back 25 years, and the claimants have embarked on a judicial journey that led them from the US to the Netherlands. On 16 October 2020, the TMC Asser Institute hosted an online roundtable discussion to present and discuss the arguments raised before the Dutch court. The three panelists, Tara Van Ho from Essex University, Tom de Boer from Prakken d’Oliveira, and Lucas Roorda from Utrecht University, each provided their stance on the case and analyzed the past, the present and the main issues of the proceedings.

Depending on its outcome, Kiobel could pave the way for further business and human rights litigation in Europe. It raises questions ranging from jurisdiction, applicable law, parent company liability and fee arrangements to state sovereignty and the responsibility of former colonial states vis-à-vis countries that emerged from colonial rule. Below you will find the highlights of our discussion; you can also watch the full video on the Asser Institute’s YouTube channel.


New Event! Kiobel in The Hague - Holding Shell Accountable in the Dutch Courts - 16 October 2020 - 4-5 pm (CET)

On Friday, 16 October, from 16.00-17.00, we will organise an online discussion about the Kiobel v. Shell case, currently before the Dutch courts in The Hague. The discussion will retrace the trajectory followed by the case in reaching The Hague, explain the arguments raised by both parties in the proceedings, and assess the potential relevance of the future ruling for the wider debate on corporate accountability and liability for human rights violations.


Background

In 1995, nine local activists from the Ogoniland region of Nigeria (the Ogoni nine) were executed by the Nigerian authorities, then under the military dictatorship of General Sani Abacha. They were protesting against the widespread pollution stemming from the exploitation of local oil resources by a Nigerian subsidiary of Royal Dutch Shell when they were arrested and found guilty of murder in a sham trial. Their deaths led first to a series of complaints against Royal Dutch Shell in the United States on the basis of the Alien Tort Statute (ATS). One of them, lodged by Esther Kiobel, the wife of one of those killed (Dr Barinem Kiobel), reached the US Supreme Court. Famously, the Court decided to curtail the application of the ATS in situations that do not sufficiently 'touch and concern' the territory of the United States.

This ruling put an end to Esther Kiobel's US lawsuit, but it did not stop her, together with three other widows (Victoria Bera, Blessing Eawo and Charity Levula), from seeking to hold the multinational company accountable for its alleged involvement in the deaths of their husbands. Instead, in 2017, they decided to continue their quest for justice on Royal Dutch Shell’s home turf, before the Dutch courts in The Hague. 25 years after the death of the Ogoni nine, the court in The Hague has just finished hearing the pleas of the parties and will render its much-awaited decision in the coming months.


Confirmed speakers

  • Tom de Boer (Human rights lawyer representing the claimants, Prakken d'Oliveira)  
  • Lucas Roorda (Utrecht University)
  • Tara van Ho (Essex University) 
  • Antoine Duval, Senior researcher at the T.M.C. Asser Instituut, will moderate the discussion


 Register here to join the discussion on Friday.

Doing Business Right – Monthly Report – May & June 2019 - By Shamistha Selvaratnam & Maisie Biggs


 

Editor’s note: Shamistha Selvaratnam is a LLM Candidate of the Advanced Masters of European and International Human Rights Law at Leiden University in the Netherlands. Prior to commencing the LLM, she worked as a business and human rights solicitor in Australia where she specialised in promoting business respect for human rights through engagement with policy, law and practice. Maisie Biggs graduated with a MSc in Global Crime, Justice and Security from the University of Edinburgh and holds a LLB from University College London. She is currently working with the Asser Institute in The Hague. She has previously worked for International Justice Mission in South Asia and the Centre for Research on Multinational Corporations (SOMO) in Amsterdam.

 

Introduction

This report compiles all relevant news, events and materials on Doing Business Right based on the coverage provided on our Twitter feed @DoinBizRight and on various websites. You are invited to contribute to this compilation via the comments section below; feel free to add links to important cases, documents and articles we may have overlooked.

 

The Headlines

Dutch Court allows Case against Shell to Proceed

On 1 May, the Hague District Court ruled that it has jurisdiction to hear a suit brought against Royal Dutch Shell by four Nigerian widows. The widows are still seeking redress for the killing of their husbands in Nigeria in 1995. They claim the defendants were accomplices in the execution of their husbands by the Abacha regime: allegedly, Shell and related companies provided material support, which led to the arrests and deaths of the activists. Although Shell denies wrongdoing in this case, the Court has allowed the suit to proceed. The judgment is accessible in Dutch here; an English translation is yet to be provided.

The Netherlands Adopts Child Labour Due Diligence Law

On 14 May, the Dutch Government passed legislation requiring certain companies to carry out due diligence related to child labour in their supply chains. The law applies both to companies registered in the Netherlands that sell or deliver goods or services to Dutch consumers and to companies registered overseas that do so. These companies will have to submit a statement declaring that they have due diligence procedures in place to prevent child labour from being used in the production of their goods or services.

While it is not yet clear when the law will come into force, it is unlikely to do so before 1 January 2020. The Dutch law is part of the growing movement to embed human rights due diligence into national legislative frameworks. The law is accessible in Dutch here.

First case under the French Due Diligence law initiated against Total

French NGOs Amis de la Terre FR and Survie have initiated civil proceedings against French energy company Total over the planned Tilenga project in Uganda. Together with CRED, Friends of the Earth Uganda and NAVODA, these organisations have sent a formal notice to Total in relation to concerns over the potential expropriation of people living in proximity to the site of the Tilenga project and threats to the environment. Information on the case from the initiating civil society organisations can be found here. This is the first case initiated under the new French Due Diligence law, and it may act as a test case for future litigation.

In a similar vein, civil society organisations CCFD-Terre Solidaire and Sherpa have launched Le Radar du Devoir de Vigilance [The Vigilance Duty Radar], a resource tracking French companies' compliance with the law. The site lists companies potentially subject to the law, along with their published vigilance plans (or lack thereof).

Bolstering the UK Modern Slavery Act

During a speech at the International Labour Organisation’s centenary conference on 11 June 2019, Theresa May outlined the UK Government’s further commitments to strengthen the Modern Slavery Act 2015; these included a central public registry of modern slavery transparency statements by businesses (in a similar vein to the Gender Pay Gap Service), and the extension of reporting requirements to the public sector. Individual ministerial departments will be obliged to publish modern slavery statements from 2021, while central Government has committed to publish voluntarily this year. The focus on public sector procurement will apparently also include a “new programme that will improve responsible recruitment in parts of our public sector supply chains that pass through Asia.”

The Final Report of the Independent Review of the Modern Slavery Act 2015 was released in May and considered in Westminster Hall on 19 June.

Doing Business Right – Monthly Report – February 2019 - By Shamistha Selvaratnam

Editor’s note: Shamistha Selvaratnam is a LLM Candidate of the Advanced Masters of European and International Human Rights Law at Leiden University in the Netherlands. Prior to commencing the LLM, she worked as a business and human rights solicitor in Australia where she specialised in promoting business respect for human rights through engagement with policy, law and practice.

 

Introduction

This report compiles all relevant news, events and materials on Doing Business Right based on the coverage provided on our twitter feed @DoinBizRight and on various websites. You are invited to contribute to this compilation via the comments section below, feel free to add links to important cases, documents and articles we may have overlooked.

 

The Headlines

German Development Ministry drafts mandatory human rights due diligence

It was reported on 10 February 2019 that the German Federal Ministry for Economic Cooperation and Development has drafted (as yet unpublished) legislation on mandatory human rights due diligence for German companies. The law would reportedly apply to companies with over 250 employees and more than €40 million in annual sales. The draft legislation targets, inter alia, the agriculture, energy, mining, textile, leather and electronics production sectors. Companies falling within the scope of the legislation would be required to undertake internal risk assessments to identify where human rights risks lie in their supply chains, and to appoint a Compliance Officer to ensure compliance with the due diligence requirements. The Labour Inspectorate, the Federal Institute for Occupational Safety and Health and the Human Rights Commissioner of the Federal Government would be responsible for enforcing the legislation, with penalties for non-compliance of up to €5 million (as well as imprisonment and exclusion from public procurement in Germany).

Kiobel case heard in the Netherlands

On 12 February 2019, the Dutch courts heard a lawsuit brought by Esther Kiobel and three other women against Shell. The plaintiffs allege that Shell was complicit in the 1995 killings of their husbands by Nigeria’s military. The husbands were Ogoni activists who took part in the mass protests against oil pollution in Nigeria’s Ogoniland. The judgment is expected to be handed down in May 2019. Read more here.

Doing Business Right – Monthly Report – March & April 2018 - By Abdurrahman Erol

Introduction

This report compiles all relevant news, events and materials on Doing Business Right based on the daily coverage provided on our twitter feed @DoinBizRight and on various websites. You are invited to complete this compilation via the comments section below. Feel free to add links to important cases, documents and articles we might have overlooked.


The Headlines

Shell-Eni Bribery Case: On 5 March, the corporate bribery trial against oil companies Shell and Eni was postponed to 14 May by a court in Milan, Italy. The charges against the companies concern bribery and corruption in the 2011 purchase of a Nigerian offshore oilfield, one of the most valuable oilfields in Africa. Although both firms denied the charges, the corruption watchdog Global Witness claimed that hundreds of millions of dollars had been paid as bribes to Nigeria’s former president and his former oil minister. Global Witness calls the case one of the biggest corruption scandals in the history of the oil sector. The trial in the Milan court is expected to last 12-18 months.

Jesner v. Arab Bank: On 24 April, in a 5-4 vote, the US Supreme Court ruled in Jesner v. Arab Bank that foreign corporations cannot be brought before US courts under the Alien Tort Statute (ATS). Between 2004 and 2010, thousands of foreign nationals sued Arab Bank under the ATS, claiming that the Bank’s officials allowed money transfers through its New York branch to Hamas, which committed violent acts in Israel and the Occupied Palestinian Territories. The Court found that international law today does not recognize “a specific, universal, and obligatory norm of corporate [tort] liability”, which is a prerequisite to bringing a lawsuit under the ATS. In the Court’s lead opinion, Justice Kennedy stated that "Courts are not well suited to make the required policy judgments that are implicated by corporate liability in cases like this one.” In her dissenting opinion, joined by three other justices, Justice Sotomayor claimed that the decision "absolves corporations from responsibility under the ATS for conscience-shocking behavior."

Fifth Anniversary of Rana Plaza: 24 April also marked the fifth anniversary of the deadly collapse of Rana Plaza in Dhaka, Bangladesh. Rana Plaza was a five-story commercial building housing several garment factories that employed around 5,000 people. The global outcry after the disaster, which claimed at least 1,134 lives, led to numerous initiatives to change business-as-usual in the garment and textile supply chains in Bangladesh and beyond. Despite these initiatives, which took various approaches to the issue of worker safety in supply chains, it is widely acknowledged that there is still a long way to go to create a safe working environment for workers in the garment and textile supply chains. On 12 April, the Asser Institute hosted a one-day conference on Rana Plaza to take stock of the regulatory and policy initiatives aimed at improving workers’ safety in the garment supply chain (you will find our background paper here).

Okpabi v. Royal Dutch Shell - Episode 3? On 27 April, more than 40 UK and international human rights, development and environment NGOs, later supported by academics from different states, urged the UK Supreme Court to allow two Nigerian fishing communities to appeal against the Court of Appeal’s February ruling in Okpabi v Royal Dutch Shell, which denied the responsibility of UK-based Royal Dutch Shell for the pipeline spills, dating back as far as 1989, that affected approximately 40,000 Nigerian farmers and fishermen. The NGOs claimed that the Court of Appeal’s decision erred in many ways, as it seriously restricts parent company liability and limits the options available to victims of corporate human rights violations seeking remedy in the UK.


Doing Business Right – Monthly Report – February 2018 - By Catherine Dunmore

Editor's Note: Catherine Dunmore is an experienced international lawyer who practised international arbitration for multinational law firms in London and Paris. She recently received her LL.M. from the University of Toronto and her main fields of interest include international criminal law and human rights. Since October 2017, she is part of the team of the Doing Business Right project at the Asser Institute.

Introduction

This report compiles all relevant news, events and materials on Doing Business Right based on the daily coverage provided on our Twitter feed @DoinBizRight. You are invited to complete this survey via the comments section below; feel free to add links to important cases, documents and articles we might have overlooked.

The Headlines

Okpabi v Royal Dutch Shell: Court of Appeal finds Shell not liable for Nigerian oil spills

On 14 February 2018, the Court of Appeal in London handed down its Approved Judgment in Okpabi and others v Royal Dutch Shell Plc and another [2018] EWCA Civ 191. The claimants are 40,000 Nigerian farmers and fishermen from the Ogale and Bille communities in the Niger Delta who allege they have suffered from decades of pollution from pipelines belonging to Shell Nigeria, a subsidiary of the British-Dutch multinational oil and gas company Shell. Indeed, in 2011 the United Nations Environment Programme published an Environmental Assessment of Ogoniland which reported serious contamination of agricultural land and waterways in the community, as well as groundwater contaminated at rates 1,000 times higher than permitted under Nigerian law, exposing Ogale’s inhabitants to serious health risks. Meanwhile, the Bille community suffered the largest loss of mangrove habitat in the history of oil spills, at 13,200 hectares. In its split decision, the Court of Appeal upheld the High Court ruling that it lacked jurisdiction, as the London-headquartered parent company Shell could not be liable for any oil pollution in the Niger Delta caused by its wholly autonomous subsidiary. The villagers now plan to seek permission to take the case to the Supreme Court, with King Okpabi of the Ogale Community stating: “We have lost our environment, our farmland and our dignity because of Shell’s operations in our community. The English Courts are our only hope because we cannot get justice in Nigeria. So let this be a landmark case, we will go all the way to the Supreme Court”.

Philippines Commission on Human Rights holding overseas hearings for oil majors

The Republic of the Philippines Commission on Human Rights is set to confront oil majors over their climate change impact through hearings in Manila, New York and London. The hearings are in response to a petition lodged in 2015 which seeks to hold forty-seven companies accountable for Philippine communities suffering from extreme weather. Human Rights Commissioner Roberto Cadiz explained that holding hearings overseas will make the process inclusive, affording all carbon companies the best chance to confront the impact of their businesses. To date, half of the companies, whose products generated around a fifth of historic greenhouse gas emissions, have not responded to the Commission. Those which have responded, questioned the Commission’s jurisdiction or argued that it was for governments and not private companies to tackle climate change. Several international law experts have also filed amicus curiae briefs in support of the petition which back the Commission’s mandate to investigate private companies over harm experienced by Filipinos. The hearings are due to commence in Manila in March 2018, with the overseas sessions following later in the year. The Commission cannot directly impose penalties on any of the respondents; however, it could recommend ways that the companies might alleviate their future operations’ human rights impact.

Tomasella v Nestlé: Consumers sue Nestlé for child labour chocolate

On 12 February 2018, consumer Danell Tomasella filed a Class Action Complaint in Case No. 1:18-cv-10269 in the Massachusetts federal court. The lawsuit against Nestlé USA Inc., the US arm of the Swiss food and beverage conglomerate, alleges that the company regularly imports cocoa beans from suppliers in the Ivory Coast and engages in deceptive marketing by hiding the fact that this chocolate supply chain utilises child and slave labour. The plaintiffs claim that, in violation of Massachusetts Consumer Protection Law, Nestlé does not disclose its Ivory Coast suppliers’ reliance on the worst forms of child labour, a matter of material interest to American consumers. They state that “Nestlé has not required its suppliers to remedy this human tragedy” and that it instead continues to be unjustly enriched by the profits from chocolate sales. The allegations highlight that much of the world’s chocolate is “quite literally brought to us by the backbreaking labor of children, in many cases under conditions of slavery”. Nestlé has responded that such consumer class actions “are not the way to solve such a serious and complex issue as forced child labor”; rather, “class action lawyers are targeting the very organizations trying to fight forced labor”.

Doing Business Right – Monthly Report – December 2017 - By Catherine Dunmore

Editor's Note: Catherine Dunmore is an experienced international lawyer who practised international arbitration for multinational law firms in London and Paris. She recently received her LL.M. from the University of Toronto and her main fields of interest include international criminal law and human rights. Since October 2017, she is part of the team of the Doing Business Right project at the Asser Institute.

Introduction

This report compiles all relevant news, events and materials on Doing Business Right based on the daily coverage provided on our Twitter feed @DoinBizRight. You are invited to complete this survey via the comments section below; feel free to add links to important cases, documents and articles we might have overlooked.

Doing Business Right – Monthly Report – November 2017 - By Catherine Dunmore

Editor's Note: Catherine Dunmore is an experienced international lawyer who practised international arbitration for multinational law firms in London and Paris. She recently received her LL.M. from the University of Toronto and her main fields of interest include international criminal law and human rights. Since October 2017, she is part of the team of the Doing Business Right project at the Asser Institute.

Introduction

This report compiles all relevant news, events and materials on Doing Business Right based on the daily coverage provided on our Twitter feed @DoinBizRight. You are invited to complete this survey via the comments section below; feel free to add links to important cases, documents and articles we might have overlooked.

A Quest for Justice: The ‘Ogoni Nine’ legal saga and the new Kiobel lawsuit against Shell - By Sara Martinetto

Editor's note: Sara Martinetto is an intern at T.M.C. Asser Institute. She has recently completed her LLM in Public International Law at the University of Amsterdam. She holds interests in Migration Law, Criminal Law, Human Rights and European Law, with a special focus on their transnational dimension.


On 29 June 2017, four Nigerian widows launched a civil case in the Netherlands against Royal Dutch Shell (RDS), Shell Petroleum N.V., the Shell Transport and Trading Company, and their subsidiary Shell Petroleum Development Company of Nigeria (SPDC). Esther Kiobel, Victoria Bera, Blessing Eawo and Charity Levula are still seeking redress for the killing of their husbands in Nigeria in 1995. They claim the defendants were accomplices in the execution of their husbands by the Abacha regime: allegedly, the companies provided material support, which then led to the arrest and death of the activists.

In the light of this lawsuit, it is interesting to retrace the so-called ‘Ogoni Nine’ legal saga. The case saw the interplay of multiple jurisdictions and actors, and its analysis helps point out some of the main legal issues encountered on the path to holding corporations accountable for human rights abuses.



Artificial Intelligence and Human Rights Due Diligence - Part 2: Subjecting AI to the HRDD Process - By Samuel Brobby

Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. In April 2021, he joined the Asser Institute as a research intern for the Doing Business Right project.

I am not convinced that inherently evil technology exists; rather, bad business models perpetuate and accentuate existing problems. AI is no exception to this phenomenon, and diligent discussion is required to ensure that the negative impacts of artificial intelligence are meticulously scrutinised. In the end, transparency, responsibility and accountability must be ensured around a technology that has the power to be an important tool for Human Rights and to support development across every sector of society. Given that this very same technology, if used irresponsibly, has the power to compound and accelerate the very issues we would like it to help solve, this blog intends to raise further questions and continue the discussion surrounding AI and responsibility. In the first part of this publication, I discussed how AI has the potential to contribute to HRDD by being technologically integrated into the process. However, before AI is even considered as a possible tool to aid the HRDD process, it will play a large part in making businesses more profitable. It will also be used by civil society, States and State-backed institutions in the pursuit of their respective goals.

AI and its many variants are, and will continue to be, deployed in a number of sectors, including marketing, healthcare, social media, recruitment, armed conflicts and many more. Given that AI has the potential to contribute negatively to Human Rights and the environment, it is important to discuss the risks and potential legal challenges surrounding AI and responsibility. Identifying these is crucial to the goal of taming AI in an attempt to mitigate some of the potential negative impacts it may have on Human Rights. The pervasive nature of this technology, along with the particular place AI developers hold in supply chains, warrants some attention. As such, this section analyses the HRDD obligations of AI developing businesses. To do so, we will illustrate some of the Human Rights (and environmental) risks linked to the creation of these AI agents, before looking at how ex ante responsibility through HRDD can be applied to AI developing businesses in the creation and commercialisation of AI algorithms.


AI and Human Rights risks

In principle, the effects of AI agents are often felt very far (in both the spatial and temporal sense) from their point of creation. This is problematic when delineating the responsibility of AI developers, who are far removed from the negative impacts they have a hand in instigating. The literature on the Human Rights and environmental risks surrounding AI is quite extensive. This sub-section presents some of the risks linked to the use of AI in transnational business to illustrate AI's capacity to negatively impact Human Rights.

Perhaps the most commonly evoked risk regarding AI and Human Rights is the problem of algorithmic bias. This refers to the manner in which AI may unintentionally perpetuate and, subsequently, deepen inherent human and societal prejudices by producing discriminatory results. These biases are transmitted via the training models and data sets that are “fed” to AI agents. In the end, these biased results are reproduced and reinforced through a continuous feedback loop. The seemingly ever-present nature of algorithmic bias poses some real problems in terms of responsibility. The examples are numerous and varied, such as the Syri case, which caused an uproar in the Netherlands. This big data analysis system was designed to be deployed in neighbourhoods with the objective of identifying potential risk profiles in relation to fraudulent social welfare claims. Its use targeted disadvantaged neighbourhoods on the basis of a list of possible suspects elaborated by Syri. Its “trawling method” meant that once deployed, it would comb through data connected to every resident in that area in order to flag inconsistencies between social welfare claims and actual living situations, without notifying the residents subjected to it. On 5 February 2020, the District Court of The Hague rendered a potentially far-reaching ruling, which provided (amongst other things) that such technology contravenes the right to respect for private and family life (Article 8 of the ECHR), citing a “special responsibility” for signatory states in the application of new technologies. The potential for identification of “fraudsters” (none of whom were actually found using Syri) could not counterbalance the infringements of Convention rights that the use of this algorithm would lead to.
The strategic choice to bring the case on the basis of Article 8 of the ECHR should not detract from the discriminatory nature of Syri, which could potentially have been challenged on the basis of Article 14 (prohibition of discrimination). Philip Alston’s amicus curiae brief touches on the manner in which the violations of the right to private and family life are compounded by the discriminatory targeting of areas with “higher concentrations of poorer and vulnerable groups”. Other examples of algorithmic bias leading to discriminatory outcomes are numerous. They include the discriminatory facial recognition algorithms developed by Amazon to help law enforcement, the use of AI in recruiting and its application in healthcare. As seen in the Syri case above, AI also carries some well documented risks in terms of privacy.

The acquisition and use of AI agents for the purposes of mass surveillance illustrate how AI developers may pander to the market to the detriment of Human Rights. The issue of pandering is linked to a near-sighted short-termism solely designed to increase profits. By pandering to these short-term goals without a view to the long-term impact of AI, the path we cut for AI, and later for responsibility, can only be reactive. Consider, for example, the recent reports citing EU-based companies selling surveillance tools, such as facial recognition technology, to key players in the Chinese mass surveillance apparatus. Despite being aware of the potential violations that this technology could lead to, and in spite of the potential Human Rights abuses that its use could facilitate, these companies elected to proceed. The subsequent Human Rights consequences of the use of these technologies for mass emotional analysis to aid law enforcement, or of network cameras to survey the Xinjiang Uyghur Autonomous Region (XUAR), are well known. Less so is the responsibility of AI developers in facilitating these violations.

It must be borne in mind, however, that the distance (be it spatial or temporal) between the creation of a new AI algorithm and its contribution to Human Rights violations or environmental damage can at times be quite large indeed. These algorithms are created and then subsequently modified, sold and used in a number of ways that further blur and diffuse any hope for a simple solution in terms of responsibility.

In short, the risks carried by AI, or facilitated by its use, are considerable. In a report to the General Assembly, the UN Working Group on Business and Human Rights clarified that due diligence requirements are “commensurate to the severity and likelihood of the adverse impact. When the likelihood and severity of an adverse impact is high, then due diligence should be more extensive”. Despite this, the risks identified in this section, and indeed by many long before this article, have not yet been met with heightened HRDD obligations. The next section aims at clarifying the ex-ante responsibility of AI developers to conduct HRDD.


Subjecting AI to HRDD: Ex-ante Responsibility

The Human Rights risks related to the development of AI can be put into two categories. The first comprises internal risks inherent to the way AI functions after the creation stage; these include algorithmic bias, privacy issues, or the environmental costs of training and computation, to name a few. The second comprises external risks to which AI developers are exposed at the stage of commercialisation. Here the issue of pandering is salient, since it leads to the development and sale of AI agents to actors which could, reasonably foreseeably, use the technology in a manner adverse to Human Rights. The ex-ante responsibility of AI developers through HRDD will be looked at through these two lenses: HRDD at the point of origin (creation stage) and HRDD at the point of arrival (commercialisation/sale).

HRDD at the creation stage of AI:

Several inherent risks have been identified with regard to AI agents. Given the knowledge of these inherent pitfalls of the technology, HRDD must be conducted at the point of origin to identify and address them.

Whilst we can acknowledge that AI presents some new issues to be solved, we should also recognise that AI’s human rights impact is by no means a radically new problem. In fact, the UNGPs offer a framework for apprehending these issues. UNGP 13b calls on businesses to “[s]eek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts”. As BSR’s paper series Artificial Intelligence: A Rights-Based Blueprint for Business remarks: “This means that data-sets, algorithms, insights, intelligence, and applications should be subject to proactive human rights due diligence”. It also means that the HRDD process is not reserved solely for AI engineers. The process would have to be undertaken by all relevant functions within AI-developing businesses that contribute to the elaboration of an AI agent, including management, the marketing department and data brokers, to name a few. From this point, the question of proximity between AI-developing businesses and adverse human rights impacts that are subsequently felt far down the line may begin to be apprehended. HRDD obligations requiring undertakings to identify, assess, prevent, cease, mitigate, monitor, communicate, account for, address and remediate potential and/or actual adverse impacts on human rights and the environment can reduce the space of corporate irresponsibility. A contraction of this space between AI-developing businesses and adverse Human Rights and environmental impacts downstream would help hold the former accountable for the latter. This is especially true if accompanied by a robust liability regime that holds these entities legally responsible for the impacts of their creations.

AI developers are best placed to assess the viability of their algorithms in search of a given result. The main driver here is often whether or not an AI agent solves a given problem with sufficient accuracy; commercial interests are at the wheel, naturally so. However, the turn to integrating ethics into AI, along with increased attention to assessing human rights impacts, is becoming an important parameter in this sector. This may be in part thanks to the increasing acceptance of HRDD as a method to regulate business activities. The additional threat carried by the potential introduction of a robust liability mechanism (perhaps in the form of upcoming EU mHRDD legislation) could strengthen this dynamic further. The reasoning is that if sanctions are imposed for products presenting avoidable systemic biases, or any other inherent defects leading to adverse impacts for which corporate groups will subsequently be liable, then more attention will be focused on preventing such harms. Indeed, if businesses operate as rational actors in a system where human rights or environmental impacts incur a real cost, then this seems a natural consequence. As such, ideas like obliging AI developers to produce a bias impact statement, or to include environmental impact assessments as part of AI due diligence, would be an interesting place to begin. This process would benefit from the inclusion of different potentially affected stakeholders, as well as potentially vulnerable populations, in the testing and creation of AI agents. The resulting AI impact statement, setting out the weaknesses and attendant risks of a given algorithm, could be subject to publication in order to increase transparency, or could be required to be acknowledged by the buyer of the algorithm.

HRDD at the stage of commercialisation of AI:

The manner in which AI is deployed hugely affects its capacity to impact Human Rights. For instance, the use of computer vision and language processing to identify and remove content promoting terrorism or racism certainly has its positive applications. The same technology, however, also has the potential to lead to serious violations of freedom of expression. Whilst these violations can arise because AI agents are insufficiently accurate or error-prone, they may also arise intentionally through use by ill-intentioned actors. As a consequence, it is of vital importance that AI producers consider the point of arrival of their technology as a key source of human rights risks as part of their HRDD process.

AI producers find themselves in an intriguing position in this regard. Given the current talent gap and the highly technical nature of their field, producers are in a strong bargaining position, unlike, say, garment producers. This means that AI developers, as suppliers of relatively rare and sophisticated technology, can leverage, or at the very least influence, where their AI agents will be put to use. This might not hold in the long term, as the supply of AI specialists will likely catch up with current demand at some point. However, the fact that AI developers are currently in a position of relative strength is of vital relevance to the current content of their obligation to conduct HRDD in the process of selling their product. Thus, the HRDD process of AI developers must concern itself with the sale of AI agents, to ensure that their algorithms are not put in the hands of actors which could, reasonably foreseeably, generate adverse Human Rights impacts.

A parallel can be drawn between the sale of AI and the sale of weapons to demonstrate the importance of HRDD at the point of arrival. The connection between a high capacity to negatively impact Human Rights and a heightened need for responsibility, mentioned earlier, is intuitive, though not currently implemented in the case of AI. In that conceptual vein, the Arms Trade Treaty (ATT), which aims to regulate the international trade in conventional arms, provides several restrictions on the possibility to export weapons on the basis of an export assessment. One of these conditions concerns the case in which the seller is informed that the weapons would be used to “commit or facilitate a serious violation of international human rights law”. Setting aside the actual impact of the ATT in regulating the arms trade, the notion of buyer due diligence it proposes for weapon-selling states may have an analogous application for AI developers. As with weaponry, this (fairly obviously) does not mean that AI lacks a whole set of legitimate uses. It does, however, mean that the HRDD process of AI developers should be more directly focused on assessing buyers than, for example, the HRDD process of garment manufacturers.


Conclusion

This contribution has aimed to highlight the manner in which HRDD and AI are likely to interact in the near future. If AI is as pervasive as expected and presents itself as a general-purpose technology that will permeate all aspects of our society, then it must be watched very closely. We know some of the pitfalls it carries internally, in terms of bias, opacity or privacy, to name a few. External pressures will further compound these. The UNGPs and the HRDD process enshrined therein provide an authoritative vantage point from which to apprehend the responsibility of AI developers. As I have argued, the due diligence process should be focused particularly on the point of origin (the creation of an AI agent) and the point of arrival (buyer due diligence) of the AI in question.

As the EU continues to press forward with general mHRDD legislation, the idea of introducing a sector-specific set of hard HRDD requirements for AI, similar to the EU Conflict Minerals Regulation or the EU Timber Regulation, whilst interesting to consider, seems unlikely to materialise. As such, in light of the unique inherent issues linked to the development and sale of AI, the work of the OECD in elaborating sector-specific due diligence guidance could be extremely valuable. Taking AI’s huge reach, its polymorphous nature and its incredible speed of development into consideration, the flexibility and potential reactivity of soft law presents itself as a good match to further clarify the HRDD process of AI developers. Coupling non-binding guidance from legitimate institutions like the OECD with hard legislative measures in the form of EU mHRDD legislation may provide AI developers with the tools required to navigate the complex, shifting terrain of responsibility before them. Additionally, attaching a comprehensive liability regime to failures in HRDD would, in my view, be vital to ensure the efficacy of HRDD. The considerable distance between the development of AI, its sale and the occurrence of damage as a result of its use by an end user will, however, likely give rise to a multitude of complex legal challenges. Questions of establishing causality or providing elements of proof (especially if the burden of proof remains on the claimants) are particularly salient. It is precisely these types of complex questions that must be answered in order to implement a functioning system of human rights responsibility for AI developers. Whether this happens remains to be seen, as developments at the EU level on mHRDD are keenly awaited.

The potential contribution of AI to the HRDD process seems clear, as posited in the first part of this blog. Indeed, if HRDD is non-static, continuous and preventive, then it seems entirely possible that AI will at some point be called upon to enhance this process. This is especially true considering AI’s prowess in risk assessment, which is a key aspect of HRDD. Conversely, the boundaries set by HRDD, along with the possible development of civil liability mechanisms, will also shape AI in the future. In light of AI’s potential societal impact, it seems reasonable to expect those who develop it to be held to a high threshold of responsibility for its negative impacts.
