Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. His research is driven by a special interest in HRDD and spans topics such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mandatory HRDD (mHRDD) at the EU level. In April 2021 he joined the Asser Institute as a research intern for the Doing Business Right project.
The recent surge in developments and debate surrounding Artificial Intelligence (AI) has been business-centric, naturally so. The conversation has long centred on the gains "digitally conscious" companies can recoup from their sizeable investments in the various forms this technology can take. The ink continues to flow as numerous articles are released daily, debating the ultimate power of artificial intelligence (and topical subsets like machine learning) on the one hand, versus comparatively more sceptical views of what these technologies can offer on the other.
Our objective here is not to pick a side in the AI debate. Rather, we would like to explore the Business & Human Rights implications of the development of AI and, in particular, its intersection with the human rights due diligence (HRDD) processes enshrined in the UN Guiding Principles on Business and Human Rights (UNGPs) and subsequent instruments. How compatible is AI with HRDD obligations? Where does AI fit into the HRDD process? Can AI be used as a tool to further HRDD obligations? Can the HRDD process, in return, affect the elaboration and progress of AI and its use in transnational business? And to what extent will the roll-out of AI be affected by HRDD obligations? These are the questions we hope to tackle in this blog.
In short, it seems two distinct shifts are occurring, rather opportunely, in close time frames. The impending mass adoption of AI in transnational business will have strong consequences for the state of human rights. This adoption is substantiated not only by an uptick in the use of AI in business, but also by policy documents produced or endorsed by leading institutions such as the ILO or the OECD. Inversely, we must consider that the HRDD obligations elaborated by the BHR community will also have strong implications for the development and roll-out of AI. These two transformations will interact increasingly as their positions are consolidated. It is these interactions that we wish to analyse in the two parts of this article: namely, the emergence of artificial intelligence as a tool to shape and further HRDD obligations (1), and the emergence of HRDD as a process to shape the development of AI (2).
AI as a tool to shape and further the HRDD process
We will begin with an analysis of how artificial intelligence can support the HRDD process, taking a special look at how certain AI algorithms can be harnessed to conduct HRDD. For this analysis, AI can be understood as defined in the European Commission's recent AI regulation proposal: "software that is developed with one or more of the techniques and approaches (…) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". HRDD is understood as outlined in the OECD Due Diligence Guidance for Responsible Business Conduct, the OHCHR Interpretive Guide on the corporate responsibility to respect human rights, and the UNGPs. As such, this article will follow the four major components of the HRDD process: identifying potential risks, identifying adequate actions, tracking implementation/results, and grievance mechanisms. The aim is to ascertain whether, and to what extent, AI is a tool that can be integrated into the HRDD process.
AI's ability to sort through, process, analyse and draw conclusions from data fits well with what is increasingly being asked of businesses in the framework of HRDD. Identifying, preventing and tracking are all terms common to both artificial intelligence and HRDD. What's more, the requirements to cease, mitigate and/or remediate the adverse human rights impacts of businesses could also benefit from the ability of certain AI algorithms to support efficient human decision-making. In short, it seems that, theoretically, there is not an aspect of the HRDD process that could not benefit from the potential input of AI. The first section of this article will take a deeper dive into this consideration to show the compatibility between AI and HRDD.
In researching this article, particular care was taken to consult a number of professionals in this field, with the goal of reconciling theory with the reality of the current situation. The overwhelming response was the following: the idea of implementing AI in the HRDD process is, for the moment, very far from being put into practice. In truth, there is no guarantee that corporations will devote the resources required to meaningfully integrate AI into their HRDD processes; it would be optimistic to expect them to do so purely out of bona fide intentions for a better world. However, this distance between the possibilities of AI in HRDD and the current state of play should not be an obstacle to discussion. On the contrary, the discussions and decisions taken at the intersection between HRDD and AI will have huge implications for the future state of human rights.
AI and the identification and assessment of actual or potential adverse human rights impacts
It seems that the most natural application of AI to the due diligence process lies in the direct use of AI's predictive capabilities to identify adverse impacts on human rights. The predictive capabilities of certain AI algorithms go hand in hand with the essence of this aspect of HRDD. AI has already found applications in risk evaluation in a whole host of different sectors. With sufficient quality data, an analogous implementation of this already-existing technology in HRDD could allow potential human rights issues to be identified long before they materialise.
AI's predictive capabilities exceed human ones, naturally so given the bandwidth at which AI can potentially operate, allowing it to sort through extremely large amounts of data at lightning-fast speeds. AI can be used to render businesses more profitable by increasing their efficiency through predictive algorithms, but corporations could also render their businesses more socially and environmentally viable by strengthening their HRDD processes with AI tools. To illustrate this, I would like to use two examples: big data processing through the use of machine learning, and computer vision to identify and prevent issues long before their effects are apparent.
Big data processing possibilities continue to grow as corporations move increasingly towards integrating digital means into their operations. The increased use of computing, connected items (such as the Internet of Things) and other digitally powered devices contributes to the exponential growth of data. Such data originates both from these increasingly connected businesses and from the different stakeholders that interact with them. In-house data is supplemented by the flow of external data to form a deep pool of valuable information. HRDD operators cannot familiarise themselves with, let alone analyse, this inconceivably huge and ever-growing mass of data. Yet someone, or something, with access to a shared pool of this data has a mine of knowledge about the very workings and impacts of a corporation and its value chain, down to the smallest connected parts. Artificial neural networks (ANNs) or other subsets of machine learning can be deployed to find patterns and interrelations in the data that were previously not visible. The applications for risk mapping or for assessing an enterprise's involvement in actual or potential adverse human rights impacts (2.3 OECD guidelines) are clear in this respect. These could involve predicting optimal maintenance times to avoid breakdowns of machinery and environmental spills. They could also include accurately computing and predicting the environmental effects of operations with a view to potentially vulnerable stakeholders (such as employees or proximate third parties). Finally, ANNs could reveal deeper interconnections pointing to adverse human rights impacts arising from business-related activities that human eyes would not even have considered. There is a wide range of possibilities through which AI can be applied to assist risk assessment in the HRDD process.
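To make the predictive-maintenance idea concrete, here is a minimal, purely illustrative sketch. A real deployment would use a trained neural network over rich operational data; this toy version uses a simple statistical anomaly detector on a hypothetical vibration series, but the principle of flagging unusual readings before a breakdown or spill is the same.

```python
# Minimal sketch (illustrative only): flag anomalous sensor readings
# from machinery before a breakdown or environmental spill.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.5):
    """Return indices of readings deviating more than `threshold`
    standard deviations from the mean (candidate maintenance alerts)."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Hypothetical vibration data from a pump; the spike at index 6
# would trigger an early maintenance alert.
vibration = [0.9, 1.0, 1.1, 0.95, 1.05, 1.0, 9.8, 1.0, 0.9, 1.1]
print(flag_anomalies(vibration))  # → [6]
```

In practice the detector would run continuously over streaming data, and the flagged events would feed into the risk map rather than being acted on automatically.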
Computer vision is another subset of machine learning, in which trained AI agents analyse, identify and interpret what they "see" in digital imagery. This could benefit environmental due diligence, for instance by setting camera traps able to periodically review flora, fauna, soil or water samples in an area related to the business activities of a corporation or its subsidiaries, to obtain clear indications of the environmental impact of those activities. As such, these perception-based agents can be of use to human decision makers in the HRDD process. Datasets are already being created and refined in the field of endangered species protection. The use of such technology to monitor certain indicator species or environments closely related to business activities should be considered a potential additional option for businesses in their impact assessments. AI could lend a helping hand to human decision makers in this respect by illuminating previously obscure information and signalling with more precision the areas in which corporations need to act in their HRDD process.
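As a toy illustration of the camera-trap idea: a real pipeline would run a trained convolutional network on actual imagery, whereas this sketch invents a crude "green cover" proxy (frames, threshold and data are all hypothetical) to show how periodic frames could be turned into an environmental-impact signal.

```python
# Toy sketch of the monitoring idea (not a real computer-vision model):
# flag camera-trap frames whose vegetation share drops below a baseline.
def green_fraction(frame):
    """Share of pixels where green dominates red and blue.
    `frame` is a list of (r, g, b) tuples - a stand-in for an image."""
    green = sum(1 for r, g, b in frame if g > r and g > b)
    return green / len(frame)

def vegetation_alert(frames, baseline=0.5):
    """Return indices of frames suggesting vegetation loss."""
    return [i for i, f in enumerate(frames) if green_fraction(f) < baseline]

healthy = [(30, 120, 40)] * 8 + [(90, 80, 70)] * 2   # 80% green pixels
degraded = [(30, 120, 40)] * 3 + [(90, 80, 70)] * 7  # 30% green pixels
print(vegetation_alert([healthy, degraded]))          # → [1]
```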
AI and the identification of adequate actions to prevent adverse human rights impacts
After identifying potential adverse human rights impacts arising from the business-related activities of a corporation, directly or through entities in its value chain, comes the need to identify adequate actions to prevent or mitigate their materialisation. Where identifying adequate action and integrating those findings into an HRDD plan involves the activities of a corporation (or a closely related subsidiary) directly, the situation is relatively straightforward. Here, predictive AI, with the help of heuristics, can be deployed to enact a plan that has a reasonable chance of tackling the risks, since it depends on entities under the direct control of the parent company. Complications arise in the case of indirect business relationships down the supply chain, for which the chain of command is not direct. Herein lies an interesting possibility of combining AI, HRDD and the notion of leverage to apply pressure throughout the value chain.
UNGP 19 and its commentary set out the expectation that businesses use their leverage to mitigate human rights risks in their global value chains. Here, AI may have an important role to play in increasing the effectiveness of corporations. Supported by the notion of "leverage" as understood by the UNGPs, AI could be deployed to relink the value chain and reassign responsibility to decision-making entities within it. Much is made of the inability (or costly burden) of reconnecting increasingly outsourced, runaway supply chain constellations. For instance, Intel's 2020 CSR report counts 10,000+ tier-1 suppliers across 89 different countries, and this number grows exponentially as you descend the tiers. It would be difficult to expect Intel to have a close enough understanding of each supplier to exercise the leverage required to fulfil the expectations of UNGP 19. Here too, AI's potential exists for corporate groups sitting atop their long value chains. As goods and services are manufactured and delivered, stopping by each cog in the value chain, so too does data: it is created, added, shared and transferred from the smallest connected parts of supply chains right through to delivery to the end consumer. The possibility of returning upstream to identify links in the chain should therefore exist.
The development of fuzzy logic algorithms could be interesting in this respect. This subset of symbolic AI may offer a technological platform on which the principles of prioritisation and proportionality, associated with leverage and HRDD, can take form. Fuzzy logic can be understood in opposition to "crisp" computer logic, which confers a definitive "yes or no" answer to a given problem. Fuzzy logic allows for the determination of nuance and degrees of truth within a given situation. This could be of use in hugely complex global value chains characterised by their huge number of moving parts. Research on the use of such algorithms in supply chain management does exist, though, to my knowledge, none focuses on applying them to HRDD. Here, fuzzy logic could play a role in determining which entities to put under pressure (or leverage) and which impacts to prioritise to ensure the maximum efficiency of an action plan. Fuzzy logic algorithms could become an interesting HRDD tool, helping human decision makers identify, with precision, the pressure points that need to be acted upon to ensure the adequacy of their responses to human rights risks.
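A minimal sketch of how fuzzy membership could support such prioritisation, assuming invented risk-severity and leverage scores on a 0-1 scale. The ramp function, the min-combination and the supplier data are illustrative choices for the sketch, not an established HRDD methodology.

```python
# Hedged sketch of the fuzzy-logic idea: scoring which supplier to
# prioritise for engagement. All inputs and labels are hypothetical.
def membership_high(x, low=0.0, high=1.0):
    """Degree (0..1) to which x counts as 'high' - a linear ramp,
    the simplest fuzzy membership function."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def priority(risk_severity, leverage):
    """Fuzzy AND (minimum) of 'risk is high' and 'leverage is high':
    prioritise suppliers where both degrees are substantial."""
    return min(membership_high(risk_severity), membership_high(leverage))

# (risk_severity, leverage) per supplier - invented figures.
suppliers = {"A": (0.9, 0.8), "B": (0.9, 0.1), "C": (0.3, 0.9)}
ranked = sorted(suppliers, key=lambda s: priority(*suppliers[s]), reverse=True)
print(ranked)  # → ['A', 'C', 'B']
```

The contrast with "crisp" logic is visible in supplier B: a yes/no rule on risk alone would flag it first, whereas the fuzzy combination demotes it because the leverage to act on that risk is low.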
AI and tracking of the effectiveness of actions taken to prevent adverse human rights impacts
The potential use of AI in the HRDD process is also visible at the stage of tracking implementation and results. AI-powered analytics could in theory allow for a more accurate assessment of the effectiveness of the responses emanating from the prior steps of the HRDD process, providing human decision makers with a broader palette of insight from which conclusions can be drawn. What's more, the HRDD process is not static: it relies on constant reassessment to ensure that the processes in place are actually effective. AI agents analysing the steady flow of data would be able to track implementation and ensure the compilation of accurate results from the HRDD processes undertaken by corporations. Analysis of such implementation in turn feeds back into elaborating the best choice of actions, potentially enabling a more effective HRDD process under the control of human decision makers.
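As a purely illustrative sketch of this tracking step, an analytics layer could compare incident rates before and after a mitigation measure; the data, metric and 20% threshold below are hypothetical.

```python
# Illustrative sketch only: did a mitigation measure work? Compare
# mean monthly incident rates before and after it was introduced.
def effectiveness(before, after, min_drop=0.2):
    """True if the mean incident rate fell by at least `min_drop`
    (20% by default) after the measure was introduced."""
    b = sum(before) / len(before)
    a = sum(after) / len(after)
    return (b - a) / b >= min_drop

monthly_incidents_before = [12, 14, 11, 13]  # invented figures
monthly_incidents_after = [9, 7, 8, 6]
print(effectiveness(monthly_incidents_before, monthly_incidents_after))  # → True
```

A negative result would feed back into the choice of actions, reflecting the non-static, iterative character of HRDD described above.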
AI and grievance mechanisms
HRDD is, for the most part, a process that relies on ex-ante assessments of potential risks in a bid to prevent them from materialising. However, the HRDD process does not stop at the assessment of risk and the requirement to act upon it. HRDD carries through to the requirement of enabling access to remedy where a risk materialises. Here, AI may have its applications too, especially considering the incentive that effective internal grievance mechanisms could provide in the face of the civil liability frameworks increasingly being considered. What role can AI play in the establishment of grievance mechanisms aimed at enabling access to remedy and at tracking the effectiveness of a company's HRDD process?
The use of anonymous complaint platforms and AI-powered chatbots can be an interesting starting point for enabling internal discussion between stakeholders (such as employees) and the corporation with which they are involved. Empowering those internal voices might give corporations at the top of their supply chains additional insight into their global human rights impact by identifying potential clusters of issues. Additionally, it may enable certain affected stakeholders to open a conversation with a view to achieving redress. Of course, internal whistleblowing is by no means a new creation, but it could benefit from the analysis and treatment provided by certain AI agents. The use of AI to establish a failure in HRDD obligations, by identifying that a corporation knew (or ought to have known) of a given risk-turned-damage and failed to mitigate it, could be an interesting avenue to explore. The implementation of AI in the steps leading up to the failure of an HRDD plan could naturally increase the efficiency of the provision of remedy.
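The cluster-identification idea could be sketched as follows, with an invented keyword taxonomy standing in for the language models a real grievance platform would use, and invented complaint texts as input.

```python
# Sketch of the clustering idea: grouping anonymous grievances by
# shared keywords to surface hotspots in a supply chain. A production
# system would use NLP models; keyword matching is a simple stand-in.
from collections import defaultdict

ISSUE_KEYWORDS = {  # illustrative taxonomy, not an HRDD standard
    "wages": {"unpaid", "overtime", "wage", "salary"},
    "safety": {"injury", "helmet", "chemical", "accident"},
}

def cluster(complaints):
    """Map each issue label to the complaints mentioning its keywords."""
    clusters = defaultdict(list)
    for text in complaints:
        words = set(text.lower().split())
        for label, keys in ISSUE_KEYWORDS.items():
            if words & keys:
                clusters[label].append(text)
    return dict(clusters)

reports = ["Unpaid overtime at site 4", "Chemical spill, no helmet issued"]
print(sorted(cluster(reports)))  # → ['safety', 'wages']
```

Clusters that grow over time in one region or tier would be exactly the kind of signal a corporation "ought to have known" about.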
Conclusion
It is perhaps too early to state definitively the place AI will occupy in the mHRDD process; luckily, that is not the purpose of this article. This section aimed to preview the possibilities AI can offer to further this process. To that end, the potential integration of AI into the HRDD process seems plausible, especially for identifying potential risks and action plans for maximum effect. The reason we can say this with confidence is that we can already see AI being implemented for risk evaluation in many sectors across the board. On the flip side, integrating AI into HRDD does come with potential challenges. For instance, as mentioned above, the use of data required for a number of these processes would likely depend on consent or contractual obligations from links in the supply chain. Inversely, widescale acceptance could reinforce the asymmetrical nature of relations between suppliers and the corporate groups sitting atop their chains, by allowing the latter to withhold and process a massive amount of data "in the name of HRDD". In that regard, safeguards must be considered to ensure that potential residual effects (monopolisation of data, or AI in support of greenwashing, to name a couple) arising from a willingness to improve HRDD do not disproportionately offset the situation in practice.
What may be required is an incentive for corporate groups to integrate AI into HRDD. In this regard, an economies-of-scale argument could be interesting to consider. If a company developed and placed a centralised AI-HRDD technology on the market at a competitive rate (by offering to cut down the number of human employees in this sector, for instance), it would be very intriguing to see how fast it would spread. However, at this point in time little movement has been identified towards the development of AI deployed specifically for the HRDD process. Be it through my own research, interviews with professionals working in and around HRDD, or interviews with academics taking an interest in this field, I have been unable to identify traces of such an initiative. As such, another potential effect of this publication could be to incite and invite thought and cooperation around such a project. Early as we may be in the adoption of AI, the nature of the HRDD process offers an opportunity for AI integration.
With that being said, AI is in itself no panacea: it is not a remedy for all of transnational business's adverse impacts on human rights. While it may offer some new possibilities in terms of HRDD, AI remains a technology whose effects depend on how we implement and use it. Whether it will improve the HRDD process without significant drawbacks remains to be seen.
The first part of this contribution has attempted to show the potential applications of AI as a tool to further the HRDD process. The following part of this article will focus on the reverse trend. What are the risks associated with the widespread implementation of AI in society? How will HRDD obligations affect the development and roll-out of AI? To what extent will the responsibility of AI developers regulate the agents they create? And what role will the BHR community play in ensuring that the practical applications of technological progress do not come at the detriment of human rights?