The Norwegian Transparency Act 2021 – An important step towards human rights responsibilities for corporations - By Nora Kenan

Editor’s note: Nora Kenan has been an intern at the Asser Institute for the past five months and is about to complete her LL.B. in International & European Law at The Hague University of Applied Sciences. Upon graduating, she will pursue a Master’s in human rights at Utrecht University.

 

The Norwegian Transparency Act [1] (‘Åpenhetsloven’), also known as the ‘Act on Business Transparency and Work with Fundamental Human Rights and Decent Work’, was proposed in April 2021. Now, two months later, the Act has officially been adopted by the Norwegian Parliament and represents yet another mandatory due diligence initiative of the kind that has been trending across various jurisdictions in recent years. The Act will require all large and medium-sized corporations in Norway to disclose the measures they take to ensure respect for human rights throughout their entire supply chains.

Various Norwegian organizations have been campaigning for years in favor of such a law. The official preparations began in 2017, when the Parliament (‘Stortinget’) requested the Government (‘Regjeringen’) to explore the possibility of introducing a law that would oblige companies to inform consumers about the steps they take to follow up on their human rights responsibilities. The Government appointed a law firm, as well as a group of experts, the Ethics Information Committee, to conduct thorough research on the matter and to investigate whether any other legal obligations, such as EEA obligations or bilateral/multilateral agreements, stood in the way of a proposal of this kind. This research concluded that there was indeed room for imposing human rights obligations on corporations. Shortly after, the Ethics Information Committee published a report in which it proposed the introduction of due diligence legislation – more specifically, the Transparency Act. The Act consists of fifteen paragraphs (§)[2], and each paragraph is accompanied by a commentary which further describes how it should be interpreted and applied.[3]

The objective of the law is essentially to promote corporate respect for human rights and decent working conditions in the production of goods and the provision of services, as well as to ensure public access to information on the steps corporations take to safeguard these goals (§1). By making this information public, individuals and stakeholders in general are given the chance to directly question the activities of a company. More...

Artificial Intelligence and Human Rights Due Diligence - Part 2: Subjecting AI to the HRDD Process - By Samuel Brobby

Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. In April 2021, he joined the Asser Institute as a research intern for the Doing Business Right project.

I am not convinced that inherently evil technology exists; rather, bad business models perpetuate and accentuate existing problems. AI is no exception to this phenomenon, and diligent discussion is required to ensure that the negative impacts of artificial intelligence are meticulously scrutinised. In the end, transparency, responsibility and accountability must be ensured around a technology that has the power to be an important tool for Human Rights and to support development across every sector of society. Given that this very same technology, if used irresponsibly, has the power to compound and accelerate the very issues we would like it to help solve, it is the intention of this blog to raise further questions and continue the discussion surrounding AI and responsibility. In the first part of this publication, I discussed how AI has the potential to contribute to HRDD by being technologically integrated into the process. However, before AI is even considered as a possible tool to aid in the HRDD process, it will play a large part in making businesses more profitable. It will also be used by civil society, States and State-backed institutions in the pursuit of their respective goals.

AI and its declinations are, and will continue to be, deployed in a number of sectors, including marketing, healthcare, social media, recruitment, armed conflicts and many more. Thus, given that AI has the potential to negatively affect Human Rights and the environment, it is important to discuss the risks and potential legal challenges surrounding AI and responsibility. Identifying these is crucial to the goal of taming AI in an attempt to mitigate some of the potential negative impacts it may have on Human Rights. The pervasive nature of this technology, along with the particular place AI developers hold in supply chains, warrants some attention. As such, this section aims at analysing the HRDD obligations of AI-developing businesses. To do so, we will illustrate some of the Human Rights (and environmental) risks linked to the creation of these AI agents before looking at the manner in which ex ante responsibility through HRDD can be applied to AI-developing businesses in the creation and commercialisation of AI algorithms. More...

Artificial Intelligence and Human Rights Due Diligence – Part 1. Integrating AI into the HRDD process - By Samuel Brobby

Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. In April 2021, he joined the Asser Institute as a research intern for the Doing Business Right project.


The recent surge in developments and debate surrounding Artificial Intelligence (AI) has been business-centric, naturally so. The conversation has long centred on the possible gains “digitally conscious” companies can recoup from their sizeable investments in the various forms this technology can take. The ink continues to flow as numerous articles are released daily, debating the ultimate power of artificial intelligence (and topical subsets like machine learning) on the one hand, versus the comparatively more philistinish views regarding what these technologies can offer on the other. Our objective here is not to pick a side in the AI debate. Rather, we would like to explore the Business & Human Rights implications of the development of AI and, in particular, its intersection with the human rights due diligence (HRDD) processes enshrined in the UN Guiding Principles on Business and Human Rights and subsequent declinations. How compatible is AI with HRDD obligations? Where does AI fit into the HRDD process? Can AI be used as a tool to further HRDD obligations? Can the HRDD process, in return, have an effect on the elaboration and progress of AI and its use in transnational business? And to what extent will the roll-out of AI be affected by HRDD obligations? These are all questions we hope to tackle in this blog.

In short, it seems two distinct shifts are occurring, rather opportunely, in close time frames. The impending mass adoption of AI in transnational business will have strong consequences for the state of Human Rights. This adoption is substantiated not only by an uptick of AI in business, but also by policy documents produced or endorsed by leading institutions such as the ILO or the OECD. Inversely, we must consider that the HRDD obligations elaborated by the BHR community will also have strong implications for the development and roll-out of AI. These two transformations will interact increasingly as their positions are consolidated. It is these interactions that we wish to analyse in the two parts of this article: namely, the emergence of Artificial Intelligence as a tool to shape and further HRDD obligations (1) and the emergence of HRDD as a process to shape the development of AI (2). More...


Corporate (Ir)Responsibility Made in Germany - Part III: The Referentenentwurf: A Compromise à la Merkel - By Mercedes Hering

Editor’s Note: Mercedes is a recent graduate of the LL.B. dual-degree programme English and German Law, which is taught jointly by University College London (UCL) and the University of Cologne. She will sit the German state exam in early 2022. In September 2020, she joined the Asser Institute as a research intern for the Doing Business Right project.

 

I. What happened so far

It took Ministers Heil (Labour, SPD), Müller (Development, CSU) and Altmaier (Economy, CDU) 18 months to agree on a draft for the Lieferkettengesetz (Supply Chain Law) to be presented soon to the German Bundestag for legislative debates. For an overview of the different proposals put forward by the Ministries and NGOs, and political discussion surrounding them, please check my previous blogs, which you can find here and here. You can also watch the panel discussion on the Lieferkettengesetz that we organized in November 2020 with Cornelia Heydenreich (Germanwatch), Miriam Saage-Maaß (European Centre for Constitutional and Human Rights), and Christopher Patz (European Coalition for Corporate Justice).

On 15 February 2021, the government’s “final” draft was published – the so-called “Referentenentwurf”. This initial agreement was met with relief from all parties involved, as it was preceded by a long-lasting deadlock. At first, the Minister for Economic Affairs, Peter Altmaier, blocked Cabinet meetings so that the government position paper (“Eckpunkteplan”) published by Ministers Heil and Müller could not be discussed. Afterwards, Altmaier again blocked a compromise proposal brought forward by Müller and Heil in Cabinet. The matter went up to the “Koalitionsausschuss”, the committee that negotiates when members of the coalition parties cannot reach an agreement. This committee failed to come to an agreement; the issue of civil liability and the scope of application were the most controversial points. Thereafter, the matter reached the “Chefetage”, Angela Merkel. She sat down with the three ministers involved and Olaf Scholz, Vice-Chancellor and Minister for Finance (SPD), and tried to mediate between the different positions. The group met twice before an agreement was eventually reached, resulting in the Referentenentwurf of 15 February 2021. The agreement did not last for long: Peter Altmaier withdrew his support for the draft yet again just after it had been circulated.

On 28 March 2021, another “final” draft was published. The two drafts differ in subtle but impactful aspects. This blog post was originally based on the first draft; its text has been amended to integrate the changes introduced in the second draft. The second Referentenentwurf is the one signed off by Cabinet on 3 March 2021. In this blog, I will first summarize the main points of the draft(s), and afterwards review the various critical points raised against it. More...


The unequal impact of COVID-19 in the global apparel industry - Part II: Strategies of rebalancing – By Mercedes Hering

Editor’s note: Mercedes is a recent graduate of the LL.B. dual-degree programme English and German Law, which is taught jointly by University College London (UCL) and the University of Cologne. She will sit the German state exam in early 2022. In September 2020 she joined the Asser Institute as a research intern for the Doing Business Right project.


My previous blog post depicted how economic asymmetry of power translates into imbalanced contractual relationships. At the moment, supply chain contracts ensure that value is extracted while precarity is outsourced. In other words, supply chains can be described as ‘global poverty chains’. In this blog post, I will present and assess four potential ways to alleviate this asymmetry and to better protect the rights of the poorest garment workers in the context of the Covid-19 pandemic. More...


The unequal impact of COVID-19 in the global apparel industry - Part I: The contractual roots - By Mercedes Hering

Editor’s note: Mercedes is a recent graduate of the LL.B. dual-degree programme English and German Law, which is taught jointly by University College London (UCL) and the University of Cologne. She will sit the German state exam in early 2022. In September 2020 she joined the Asser Institute as a research intern for the Doing Business Right project.

 

The Covid-19 pandemic is straining global supply chains and exposing the inequality that underlies them. As many countries entered lockdowns, the economy was brought to a rapid halt. This caused demand for apparel goods to plummet. Global apparel brands, in turn, began to disengage from business relationships with their suppliers. Lead firms cancelled or even breached their contracts with suppliers (often relying on force majeure or hardship), and suspended, amended or postponed orders already made. This practice had a devastating effect on suppliers.

This situation again shows that the contractual structure of global supply chains is tilted towards (often) European or North American lead firms. In this blog, I will first outline the power imbalance embedded in global supply chain contracts. Secondly, I will outline how order cancellations impact suppliers and their workers. In Part II, I will go through four approaches to mitigate the distress of suppliers and their workers and to allow the parties to reach solutions which take into account their seemingly antagonistic interests. More...

Corporate (ir)responsibility made in Germany – Event report - By Mercedes Hering

Editor's note: Mercedes is a recent graduate of the LL.B. dual-degree programme English and German Law, which is taught jointly by University College London (UCL) and the University of Cologne. She will sit the German state exam in early 2022. Alongside her studies, she is working as a student research assistant at the Institute for International and Foreign Private Law in Cologne. In September 2020, she joined the Asser Institute as a research intern for the Doing Business Right project.

On 27 November 2020, the T.M.C. Asser Institute hosted an online roundtable discussion on the German Supply Chain Law (Lieferkettengesetz). The full recording of the event is available here:

The three panelists, Cornelia Heydenreich from Germanwatch, Miriam Saage-Maaß from the ECCHR and Christopher Patz from the ECCJ reflected on the political framework surrounding the debate, current drafts, and Germany’s role in the European discussion on binding due diligence legislation.

I. The pathway to a Lieferkettengesetz 

As Heydenreich pointed out, civil society’s role in the struggle for a Lieferkettengesetz can hardly be overstated. When the UNGPs were adopted in 2011, Germany was in no rush to implement binding due diligence legislation. Instead, the German legislators waited for their European counterparts to come forward with an action plan. It was in 2013 that a new – more left-leaning – government first voiced the idea that a national action plan should be drawn up, and in 2015, consultations began. The consultation process was a dialogue; the drafting process itself was not. Even though the monitoring methodology fell short of civil society’s expectations, the result of the monitoring process was shocking nonetheless: only 13-17% of companies complied with the National Action Plan.

It became clear that the government needed to implement binding due diligence regulation. It also became clear that the drafting process would have to begin as soon as possible for a law to be passed before the general election in September 2021. 

II. Current drafts

Saage-Maaß turned to the different proposals for a Lieferkettengesetz: the government’s position paper from the Ministry of Development and the Ministry of Labour, as well as civil society’s model law. Contrary to what the government currently envisages, Saage-Maaß emphasized the need to include small and medium-sized companies that operate in high-risk areas.

The role of private international law must not be neglected. The question turns on whether the Lieferkettengesetz as a whole will constitute an overriding mandatory provision, or merely the due diligence obligation itself.

Civil society organizations are particularly critical of so-called “safe harbor” provisions, which allow companies to be exempted from liability if they are part of certain multi-stakeholder initiatives (MSIs). All panelists agreed, however, that, as of today, no MSI meets the standards set out by the OECD. In its report, the Institute for Multi-Stakeholder Initiative Integrity (MSI Integrity) comes to the same conclusion: “MSIs are not effective tools for holding corporations accountable for abuses, protecting rights holders against human rights violations, or providing survivors and victims with access to remedy.”

For an overview of other aspects of the legislative proposals, such as the burden of proof, please see the preceding blog series “Corporate (Ir)responsibility Made in Germany”.

III. EU-wide discussion

In April 2020, the European Commissioner for Justice, Didier Reynders, announced that the Commission would commit to legislation on mandatory due diligence. Patz emphasized the positive impact Germany’s Council Presidency, beginning in July 2020, has had on the endeavor. Germany’s Council Presidency stands out because of its strong affirmative call for a supply chain law and for reforms of directors’ duties. At the beginning of December, the Council published its Conclusions on Human Rights and Decent Work in Global Supply Chains, in which it calls on the European Commission to launch an EU Action Plan by 2021 (n. 45) and to table a proposal for an EU legal framework on corporate due diligence (n. 46). According to Patz, this constitutes a strong political signal. It is reinforced by three parliamentary committees – the Human Rights Committee, the Development Committee, and the Legal Affairs Committee – which also spoke out in favor of civil liability.

Another strong political signal was sent by the EU Fundamental Rights Agency, which in its report “Business and Human Rights – Access to Remedy” called for significant changes pertaining to the reversal of the burden of proof, class actions and procedural mechanisms in order to facilitate access to justice for those affected. 

The work of German MEP Anna Cavazzini (Greens) should be highlighted, too. In the European Parliament, she pushed for an additional enforcement mechanism in the form of trade restrictions: products that benefitted from human rights abuses along the supply chain should not have access to the European single market, and in order for the trade restrictions to be lifted, remediation would have to be provided. This initiative counters criticism from civil society that due diligence laws often have the effect of targeting whole sectors of one particular economy; adopting additional trade restrictions allows for a much more targeted approach.

In her report on an anti-deforestation legal framework, Delara Burkhardt (S&D) also advocated for civil liability. Companies that exercise control over other companies should be held liable even where the unlawful act was committed not by them directly, but by the controlled company. In order for this liability mechanism to be effective, Burkhardt advocates a presumption in favor of control. This helps to balance the information deficit litigants suffer because they do not have access to internal corporate documentation.

IV. Conclusion 

At the beginning of the roundtable discussion, Duval pointed out that Germany’s stance on any binding due diligence regulation will be decisive. Germany’s role in the EU-wide discussion can hardly be overstated: Germany accounts for 30% of all EU exports and 20% of all EU imports. Factoring in France’s loi de vigilance, both countries together could put enough pressure on the European legislators to push for an EU-wide mandatory due diligence regulation.

Germany is as close as it has ever been to adopting a Lieferkettengesetz. Yet the process has come to a halt. The government position paper should have been discussed in the Cabinet at the end of last year for the law to be adopted in 2021. All ministers have to agree; afterwards, the proposal will go to Parliament. Heydenreich said that the law will have to be adopted in May, or June at the latest, as the parliamentary session ends in July.

At least Germany’s involvement in the EU-wide debate looks promising. Germany’s Council Presidency, as well as individual German MEPs, have had a tremendous impact on the push for an EU-wide due diligence regulation.

New Event! Corporate (ir)responsibility made in Germany - 27 November - 3pm (CET)

On 27 November, we will host a digital discussion on Germany’s approach to corporate (ir)responsibility for human rights violations and environmental harms in the supply chains of German businesses. This event aims to analyse the evolution of the business and human rights policy discussion in Germany and its influence on the wider European debates on mandatory human rights due diligence EU legislation. Germany is the EU’s economic powerhouse and a trading giant, hence its position on the (ir)responsibility of corporations for human rights risks and harms throughout their supply chains has major consequences for the EU and beyond.

Background

Currently, Germany is debating the adoption of a supply chain law, or Lieferkettengesetz. This would mark the end of a long political and legal struggle, which started in 2016, when the German government adopted its National Action Plan (NAP) 2016-2020. Germany’s NAP, like many others, counted on voluntary commitments from businesses to implement human rights and environmental due diligence throughout their supply chains. Unlike other NAPs, the German one also included a monitoring process, which tracked the progress businesses made during that four-year period.

The final report, which was published in September, showed that only roughly 13-17% of German businesses implemented the voluntary due diligence measures encouraged in the NAP. On the basis of these rather disappointing results, and as required by the coalition agreement between the two governing parties, a draft for a Lieferkettengesetz should have been presented to the Cabinet this autumn. However, the Ministry for Economic Affairs and Energy, backed by business lobby groups, strongly opposes any form of civil liability for human rights violations committed within supply chains and has so far managed to delay the process.

Our discussion aims to review these developments and highlight the key drivers behind the (slow) movement towards a Lieferkettengesetz. Weaving political insights with legal know-how, our speakers will provide a comprehensive overview (in English) of Germany’s positioning in the business and human rights discussion and its potential influence on the future trajectory of European legislation.

Speakers:

  • Cornelia Heydenreich (Germanwatch)
  • Miriam Saage-Maaß (European Centre for Constitutional and Human Rights)
  • Christopher Patz (European Coalition for Corporate Justice)

Moderator:

  • Antoine Duval (Asser Institute)


To register for this event, please click here. You will receive a link before the start of the event.


For enquiries, contact conferencemanager@asser.nl


Winter academy: Due diligence as a master key to responsible business conduct

On 25-29 January 2021, the Asser Institute’s ‘Doing business right’ project is organising an online winter academy on ‘Doing business right: Due diligence as a master key to responsible business conduct’.

This academy brings together students, academics and professionals from around the world and provides a deep dive into the due diligence process as a strategy to achieve responsible business conduct.

Learn more and register here. 

Call for Papers - Delocalised Justice: The transnationalisation of corporate accountability for human rights violations originating in Africa - Deadline 15 January 2021

More than twenty years ago, nine local activists from the Ogoni region of Nigeria were executed by the then military dictatorship. The story of the Ogoni Nine does not stop in Nigeria; the tale of the nine men, the many lives lost, and the environmental degradation linked to the extraction of oil in the region by Shell has quite literally travelled the world. What is commonly referred to as the Kiobel case – after the application lodged by Esther Kiobel, the widow of Dr. Barinem Kiobel – originated in Nigeria, has been heard by courts in the USA, and is currently before Dutch courts. The Kiobel case, as well as a flurry of other cases (e.g. the Bralima case before the Dutch NCP, the Nevsun case before the Canadian courts, the Vedanta case before the UK courts, or the Total case before the French courts, among others), embodies the flight of corporate accountability cases out of their original African contexts.

This transnational quest for an effective remedy by those whose human and/or environmental rights have been violated is understandable, but it also raises serious questions about the consequences of the delocalisation of access to remedies in such cases. This conference aims to provide a forum for critical discussions of the justifications for, and consequences of, using various delocalised ‘sites of justice’ for human and environmental rights violations associated with ‘doing business’ in Africa. The aim is not to focus on Kiobel or Nigeria in particular, although contributions on this case are welcome, but to engage more generally in a critical examination of cases that ‘migrate’ between different sites of justice, and of the associated benefits and drawbacks of the displacement of corporate accountability out of African courts to courts or non-judicial mechanisms (such as OECD National Contact Points) based in the so-called Global North. In doing so, we strongly encourage applicants to consider a variety of (critical) theoretical perspectives in the analysis of this phenomenon.

In this collaboration between the Asser Institute’s Doing Business Right project and AfronomicsLaw, we welcome contributions from scholars working on African international law, African perspectives on international/transnational law, as well as scholars working on business and human rights more generally. The aim is to bring a plurality of voices into conversation with each other, and to generate original (and critical) engagements with the operation of transnational justice in the business and human rights space. With important developments taking place at the international level, such as the drafting of a binding Treaty on Business and Human Rights, the preparation of European legislation on mandatory human rights due diligence, and the emergence of the African Continental Free Trade Area (AfCFTA), which is set to foster business across African borders, such discussions are not only timely, but also necessary.


Deadlines and requirements:

In order to increase engagement from a broader range of actors from the continent, the conference will be bilingual (English and French). Conference presentations and outputs will be accepted in either language: a 2,000-word blog post as part of a special symposium on AfronomicsLaw, as well as a full-length paper for a special issue with a journal (journal TBD).


Overview of deadlines:

  • Deadline for abstract submission: 15 January 2021
  • Draft papers due: 1 March 2021
  • Digital conference: 24-26 March 2021
  • Final contribution to blog symposium on AfronomicsLaw: 30 April 2021
  • Final papers due for special issue with journal: 1 July 2021


Please submit abstracts in English or French (250 words) accompanied by a short CV (max. 5 pages) to m.plagis@asser.nl by 23:59 CET on 15 January 2021.

Kiobel in The Hague – Holding Shell Accountable in Dutch Courts - Event Report - By Mercedes Hering

Editor's note: Mercedes is a recent graduate of the LL.B. dual-degree programme English and German Law, which is taught jointly by University College London (UCL) and the University of Cologne. She will sit the German state exam in early 2022. Alongside her studies, she is working as a student research assistant at the Institute for International and Foreign Private Law in Cologne. In September 2020, she joined the Asser Institute as a research intern for the Doing Business Right project.


On 25 September 2020, the final hearings in the Kiobel case took place before the Dutch District Court in The Hague. The case dates back 25 years, and the claimants have embarked on a judicial journey that led them from the US to the Netherlands. On 16 October 2020, the T.M.C. Asser Institute hosted an online roundtable discussion to present and discuss the arguments raised before the Dutch court. The three panelists – Tara Van Ho from the University of Essex, Tom de Boer from Prakken d’Oliveira, and Lucas Roorda from Utrecht University – each provided their stance on the case and analyzed the past, the present and the main issues of the proceedings.

Depending on the outcome of the case, Kiobel could pave the way for further business and human rights litigation in Europe. It raises questions ranging from jurisdiction, applicable law, parent company liability and fee arrangements to state sovereignty and the responsibility of former colonial states vis-à-vis countries that emerged from colonial rule. Below you will find the highlights of our discussion; you can also watch the full video on the Asser Institute’s YouTube channel. More...



Artificial Intelligence and Human Rights Due Diligence - Part 2: Subjecting AI to the HRDD Process - By Samuel Brobby

Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. In April 2021, he joined the Asser Institute as a research intern for the Doing Business Right project.

I am not convinced that inherently evil technology exists; rather, bad business models perpetuate and accentuate existing problems. AI is no exception to this phenomenon, and diligent discussion is required to ensure that the negative impacts of artificial intelligence are meticulously scrutinised. In the end, transparency, responsibility and accountability must be ensured around a technology that has the power to be an important tool for Human Rights and to provide support for development across every sector of society. Given that this very same technology, if used irresponsibly, has the power to compound and accelerate the very issues we would like it to help solve, it is the intention of this blog to raise further questions and continue the discussion surrounding AI and responsibility. In the first part of this publication, I discussed how AI has the potential to contribute to HRDD by being technologically integrated into the process. However, before AI is even considered as a possible tool to aid in the HRDD process, it will play a large part in making businesses more profitable. It will also be used by civil society, States and State-backed institutions in the pursuit of their respective goals.

AI and its declinations are, and will continue to be, deployed in a number of sectors, including marketing, healthcare, social media, recruitment, armed conflicts and many more. Thus, given that AI has the potential to negatively affect Human Rights and the environment, it is important to discuss the risks and potential legal challenges surrounding AI and responsibility. Identifying these is crucial to the goal of taming AI in an attempt to mitigate some of the potential negative impacts it may have on Human Rights. The pervasive nature of this technology, along with the particular place AI developers hold in supply chains, warrants some attention. As such, this section aims at analysing the HRDD obligations of AI-developing businesses. To do so, we will illustrate some of the Human Rights (and environmental) risks linked to the creation of these AI agents before looking at the manner in which ex ante responsibility through HRDD can be applied to AI-developing businesses in the creation and commercialisation of AI algorithms.


AI and Human Rights risks

In principle, the effects of AI agents are often felt very far – in both the spatial and the temporal sense – from the point at which these agents are created. This is problematic in terms of delineating the responsibility of AI developers, who are far removed from the negative impacts they have a hand in instigating. The literature on the Human Rights and environmental risks surrounding AI is quite extensive. This sub-section presents some of the risks linked to the use of AI in transnational business to illustrate AI’s capacity to negatively impact Human Rights.

Perhaps the most commonly evoked risk regarding AI and Human Rights is the problem of algorithmic bias. This refers to the manner in which AI may unintentionally perpetuate and subsequently deepen inherent human/societal prejudices by producing discriminatory results. These biases are transmitted via training models and data sets that are “fed” to AI agents. In the end, these biased results are reproduced and reinforced through a continuous feedback loop. The seemingly ever-present nature of algorithmic biases poses some real problems in terms of responsibility. The examples are numerous and vary in nature, such as the Syri case, which caused an uproar in the Netherlands. This big data analysis system was designed to be deployed in neighbourhoods with the objective of identifying potential risk-profiles in relation to fraudulent social welfare claims. Its use targeted disadvantaged neighbourhoods on the basis of a list of possible suspects elaborated by Syri. Its “trawling method” meant that once deployed, it would comb through data connected to every resident in that area in order to flag inconsistencies between social welfare claims and actual living situations, without notifying the residents that were subjected to it. On 5 February 2020, the District Court of The Hague rendered a potentially far-reaching ruling, which provided (amongst other things) that such technology contravenes the right to respect for private and family life (Article 8 of the ECHR), citing a “special responsibility” for signatory states in the application of new technologies. The potential for identification of “fraudsters” (none of whom were actually found using Syri) could not counterbalance the infringements of Convention rights that the use of this algorithm would lead to. The strategic choice to bring the case on the basis of Article 8 of the ECHR should not detract from the discriminatory nature of Syri, which could potentially have been challenged on the basis of Article 14 (prohibition of discrimination). Philip Alston’s amicus curiae brief touches on the manner in which the violations of the right to private and family life are compounded by the discriminatory targeting of areas with “higher concentrations of poorer and vulnerable groups”. Other examples of algorithmic bias leading to discriminatory outcomes are numerous. They include the discriminatory facial recognition algorithms developed by Amazon to help law enforcement, the use of AI in recruiting, and its application in healthcare. As seen in the Syri case above, AI also carries some well-documented risks in terms of privacy.
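To make the feedback-loop mechanism concrete, the following minimal simulation (a toy model written for this post, not a depiction of Syri or any real system) shows how a decision model trained on historically biased approvals keeps reproducing the disparity indefinitely, even though the two groups are, by construction, identical:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_applicants(n):
    """Two groups with identical 'true ability' distributions."""
    group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
    ability = rng.normal(0.0, 1.0, n)    # same distribution for both groups
    return group, ability

def historical_labels(group, ability):
    """Biased past decisions: group B needed a higher ability to be approved."""
    threshold = np.where(group == 0, 0.0, 0.8)
    return (ability > threshold).astype(int)

def fit_per_group_thresholds(group, ability, labels):
    """A stand-in for 'training': learn the lowest ability that was approved
    in each group (i.e. the model uses group membership as a feature)."""
    return [ability[(group == g) & (labels == 1)].min() for g in (0, 1)]

# round 0: the model is trained on biased historical decisions
group, ability = sample_applicants(10_000)
labels = historical_labels(group, ability)

for rnd in range(1, 4):
    thresholds = fit_per_group_thresholds(group, ability, labels)
    # the model now makes the decisions, and its outputs become
    # the training data for the next round: a feedback loop
    group, ability = sample_applicants(10_000)
    labels = (ability > np.take(thresholds, group)).astype(int)
    rate_a = labels[group == 0].mean()
    rate_b = labels[group == 1].mean()
    print(f"round {rnd}: approval rate A = {rate_a:.2f}, B = {rate_b:.2f}")
```

Because each round’s decisions become the next round’s training data, the initial human bias survives every retraining cycle without anyone re-introducing it – which is precisely why detection duties at the creation stage matter.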

The acquisition and use of AI agents for the purposes of mass surveillance may be an illustration of AI developers pandering to the market to the detriment of Human Rights. The issue of pandering is linked to a near-sighted short-termism solely designed to increase profits. By pandering to these short-term goals without regard for the long-term impact of AI, the path we cut for AI, and later for responsibility, can only be reactive. Here we may consider, for example, the recent reports citing EU-based companies selling surveillance tools, such as facial recognition technology, to key players in the Chinese mass surveillance apparatus. Despite being aware of the potential violations that this technology could lead to, and in spite of the potential Human Rights abuses that its use could facilitate, these companies elected to proceed. The subsequent Human Rights consequences of the use of these technologies, for mass emotional analysis to aid law enforcement or for network cameras to survey the Xinjiang Uyghur Autonomous Region (XUAR), are well known. Less well known is the responsibility of AI developers in facilitating these violations.

It must be borne in mind, however, that the distance (be it spatial or temporal) between the creation of a new AI algorithm and its contribution to Human Rights violations or environmental damage can at times be quite large indeed. These algorithms are created and then subsequently modified, sold and used in a number of ways that further blur and diffuse any hope for a simple solution in terms of responsibility.

In short, the risks that are carried by AI, or facilitated by its use, are considerable. In a report to the General Assembly, the UN Working Group on Business and Human Rights clarified that due diligence requirements are “commensurate to the severity and likelihood of the adverse impact. When the likelihood and severity of an adverse impact is high, then due diligence should be more extensive”. Despite this, the risks identified in this section, and indeed by many long before this article, have not yet been met with heightened HRDD obligations. The next section aims at providing some elements to clarify the ex-ante responsibility of AI developers to conduct HRDD.
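One way to read the Working Group’s “commensurate” language in operational terms is as a severity-likelihood prioritisation of due diligence effort. The sketch below is purely illustrative: the scales, the multiplication, and the cut-off values are invented for this post and are not drawn from the UNGPs or any OECD guidance.

```python
SEVERITY = {"negligible": 1, "serious": 2, "severe": 3, "irremediable": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "near-certain": 4}

def diligence_level(severity: str, likelihood: str) -> str:
    """Rank impacts to sequence due diligence effort; prioritisation
    orders the work, it does not excuse ignoring lower-ranked impacts."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 9:
        return "extensive due diligence and immediate mitigation"
    if score >= 4:
        return "enhanced due diligence"
    return "standard monitoring"

# e.g. algorithmic bias in a welfare-fraud scoring system
print(diligence_level("severe", "likely"))   # extensive due diligence ...
```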


Subjecting AI to HRDD: Ex-ante Responsibility

The Human Rights risks related to the development of AI can be put into two categories. The first relates to internal risks that are inherent in the way AI functions after the creation stage; these include algorithmic bias, privacy issues, or the environmental costs of training and computation, to name a few. The second relates to external risks that AI developers are exposed to at the stage of commercialisation. Here the issue of pandering is salient, since it leads to the development and sale of AI agents to actors which could, reasonably foreseeably, use the technology in a manner that is adverse to Human Rights. The ex-ante responsibility of AI developers through HRDD will be looked at through these two lenses: HRDD at the point of origin (creation stage) and HRDD at the point of arrival (commercialisation/sale).

HRDD at the creation stage of AI:

Several inherent risks have been identified with regard to AI agents. Given the knowledge of these inherent pitfalls of the technology, HRDD must be conducted at the point of origin to identify and address them.

Whilst we can acknowledge that AI presents some new issues that must be solved, we may also recognize that the issue of AI’s human rights impact is by no means a radically new one. In fact, the UNGPs offer a framework for apprehending these issues. UNGP 13b calls on businesses to “[s]eek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts”. As BSR’s paper series Artificial Intelligence: A Rights-Based Blueprint for Business remarks: “This means that data-sets, algorithms, insights, intelligence, and applications should be subject to proactive human rights due diligence”. It also means that the HRDD process is not solely reserved for AI engineers. The process would have to be undertaken by all relevant units within AI-developing businesses that contribute to the elaboration of an AI agent, including management, the marketing department and data brokers, to name a few. From this point, the question of proximity between AI-developing businesses and adverse human rights impacts that are subsequently felt far down the line may begin to be apprehended. HRDD obligations requiring undertakings to identify, assess, prevent, cease, mitigate, monitor, communicate, account for, address and remediate potential and/or actual adverse impacts on human rights and the environment can reduce the space of corporate irresponsibility. A contraction of this space between AI-developing businesses and adverse Human Rights and environmental impacts downstream would help hold the former accountable for the latter. This is especially true if accompanied by a robust liability regime that holds these entities legally responsible for the impacts of their creations.

AI developers are best placed to assess the viability of their algorithms in search of a given result. The main driver here is often whether or not an AI agent solves a given problem with sufficient accuracy; to this effect, commercial interests are at the wheel, naturally so. However, the turn to integrating ethics into AI, along with increased attention to assessing Human Rights impacts, is becoming an important parameter in this sector. This may be partly thanks to the increasing acceptance of HRDD as a method to regulate business activities. The additional threat carried by the potential introduction of a robust liability mechanism (perhaps in the form of upcoming EU mHRDD legislation) could strengthen this dynamic further. The reasoning is that if sanctions are imposed for products presenting avoidable systemic biases, or any other inherent defects leading to adverse impacts for which corporate groups will subsequently be liable, then more attention will be focused on preventing such harms. Indeed, if businesses operate as rational actors in a system where Human Rights or environmental impacts incur a real cost, then this seems like a natural consequence. As such, ideas like introducing an obligation for AI developers to produce a bias impact statement, or to include environmental impact assessments as part of an AI due diligence, would be an interesting place to begin. This process would benefit from the inclusion of different potentially affected stakeholders, as well as potentially vulnerable populations, in the process of testing and creating AI agents. The resulting AI impact statement, carrying the weaknesses and subsequent risks of a given algorithm, could be subject to publication in order to increase transparency, or be required to be acknowledged by the buyer of an AI algorithm.
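As an illustration of what a bias impact statement might contain, the sketch below computes one widely used fairness metric: per-group selection rates and their ratio to the most-favoured group, loosely modelled on the “four-fifths rule” from US employment-selection practice. The function names and the 0.8 cut-off are illustrative choices, not requirements drawn from any existing or proposed legislation.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs from an audit log."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_report(decisions, threshold=0.8):
    """Compare each group's selection rate to the most-favoured group's;
    flag ratios below `threshold` (the 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio_to_best": round(r / best, 3),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

# hypothetical audit log of (group, decision) pairs
log = ([("A", True)] * 50 + [("A", False)] * 50
       + [("B", True)] * 25 + [("B", False)] * 75)
print(disparate_impact_report(log))
# {'A': {'rate': 0.5, 'ratio_to_best': 1.0, 'flagged': False},
#  'B': {'rate': 0.25, 'ratio_to_best': 0.5, 'flagged': True}}
```

A published statement would of course need to cover far more than a single ratio – data provenance, tested subpopulations, known failure modes – but even one disclosed metric gives buyers and affected stakeholders something concrete to interrogate.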

HRDD at the stage of commercialisation of AI:

The manner in which AI is deployed hugely affects its capacity to impact Human Rights. For instance, the use of computer vision and language processing to identify and remove content aimed at promoting terrorism or racism certainly has its positive applications. The same technology may also have the potential to lead to serious violations of freedom of expression. Whilst these violations can arise as a consequence of AI agents being insufficiently accurate or error-prone, they may also arise intentionally through use by ill-intentioned actors. As a consequence, it is of vital importance that AI producers consider the point of arrival of their technology as a key source of human rights risks as part of their HRDD process.

AI producers find themselves in an intriguing position in this regard. Given the current talent gap and the very high technicality involved in their field, producers are in a strong bargaining position, unlike, say, producers of garments. This means that AI developers, as suppliers of relatively rare and sophisticated technology, can control, or at the very least influence, where their AI agents will be put to use. This might not remain the case in the long term, as the supply of AI specialists will likely increase to catch up with current demand at some point. However, the fact that AI developers are currently in a position of relative strength is of vital relevance to the current content of their obligation to conduct HRDD in the process of selling their product. Thus, the HRDD process of AI developers must concern itself with the sale of AI agents, to ensure that their algorithms are not being put in the hands of actors which could reasonably foreseeably generate adverse Human Rights impacts.
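In practice, such sale-stage screening could be supported by something as simple as a structured check before any sale is approved. The sketch below is a hypothetical illustration of that idea; the risk factors, field names and escalation rule are invented for this post and do not reflect any actual vendor’s compliance process.

```python
from dataclasses import dataclass

@dataclass
class BuyerProfile:
    """Hypothetical inputs a sale-stage HRDD review might collect;
    the field names are illustrative, not drawn from any real standard."""
    name: str
    jurisdiction_risk: str       # e.g. rating taken from a country-risk index
    stated_end_use: str
    on_sanctions_list: bool = False
    prior_abuse_reports: int = 0

HIGH_RISK_USES = {"mass surveillance", "predictive policing", "social scoring"}

def escalate_for_review(buyer: BuyerProfile) -> bool:
    """Route the sale to enhanced human review instead of auto-approval
    whenever any red flag suggests foreseeable adverse HR impacts."""
    return (buyer.on_sanctions_list
            or buyer.stated_end_use in HIGH_RISK_USES
            or buyer.jurisdiction_risk == "high"
            or buyer.prior_abuse_reports > 0)

buyer = BuyerProfile("ExampleCorp", "high", "mass surveillance")
print(escalate_for_review(buyer))   # True -> no automatic approval
```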

A parallel can be drawn between the sale of AI and the sale of weapons to demonstrate the importance of HRDD at the point of arrival. The connection between a high capacity to negatively impact Human Rights and a heightened need for responsibility, mentioned earlier, is intuitive, though not currently implemented in the case of AI. In that conceptual vein, the Arms Trade Treaty (ATT), which aims to regulate the international trade in conventional arms, provides several restrictions on the possibility to export weapons on the basis of an export assessment. One of these conditions concerns the case in which the seller is informed that the weapons would be used to “commit or facilitate a serious violation of international human rights law”. Setting the actual impact of the ATT in regulating the arms trade aside, the notion of buyer due diligence it proposes for weapon-selling states may have an analogous application for AI developers. As with weaponry, this (fairly obviously) does not mean that AI has no legally justified uses. It does, however, mean that the HRDD process of AI developers should be more directly focused on assessing buyers than, for example, the HRDD process of garment manufacturers.


Conclusion

This contribution aims at highlighting the manner in which HRDD and AI will likely interact with each other in the near future. If AI is as pervasive as it is expected to be, and presents itself as a general-purpose technology which will permeate all aspects of our society, then it must be watched very closely. We know some of the pitfalls it carries internally, in terms of bias, opacity or privacy, to name a few. External pressure will further compound these. The UNGPs and the HRDD process enshrined therein provide an authoritative vantage point from which to apprehend the responsibility of AI developers. As I have argued, the due diligence process should be focused particularly at the point of origin (creation of an AI agent) and the point of arrival (buyer due diligence) of the AI in question.

As the EU continues to press forward with general mHRDD legislation, the idea of introducing a sector-specific set of hard HRDD requirements for AI, similar to what we see with the EU Conflict Minerals Regulation or the EU Timber Regulation, whilst interesting to consider, seems unlikely. As such, in light of the unique inherent issues linked to the development and sale of AI, the work of the OECD in elaborating sector-specific due diligence guidance could be extremely valuable. Taking AI’s huge reach, its polymorphous nature and its incredible speed of development into consideration, the flexibility and potential reactivity of soft law presents itself as a good match to further clarify the HRDD process of AI developers. Coupling non-binding guidance from legitimate institutions like the OECD with hard legislative measures in the form of EU mHRDD legislation may provide AI developers with the tools required to navigate the complex, shifting terrain of responsibility before them. Additionally, attaching a comprehensive liability regime for failures in HRDD would, in my view, be vital to ensure the efficacy of HRDD. However, the considerable distance between the development of AI, its sale, and the occurrence of damage as a result of its use by the end user will likely give rise to a multitude of complex legal challenges. Questions in terms of establishing causality or providing elements of proof (especially if the burden of proof remains on the claimants) are particularly salient. It is precisely these types of complex questions that must be answered in order to implement a functioning system of human rights responsibility for AI developers. Whether or not this happens remains to be seen, as developments at the EU level on mHRDD are keenly awaited.

The potential contribution of AI to the HRDD process seems clear, as posited in the first part of this blog. Indeed, if HRDD is non-static, continuous and preventive, then it seems entirely possible that AI will be called upon at some point in an attempt to enhance this process. This is especially true considering AI’s prowess in terms of risk assessment, which is a key aspect of HRDD. Inversely, the boundaries set by HRDD, along with the possibility of developing civil liability mechanisms, will also affect the shape of AI in the future. In light of AI’s potential societal impact, it seems reasonable to expect those who develop it to be held to a high threshold of responsibility for its negative impacts.
