This open access book examines the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Starting from the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values.
The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by businesses and public bodies in the direction of human rights and societal values.
Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation.
The book's central focus on human rights and societal values in AI, together with the solutions it proposes, will make it of interest to legal scholars, AI developers and providers, policy makers and regulators.
Alessandro Mantelero is Associate Professor of Private Law and Law & Technology in the Department of Management and Production Engineering at the Politecnico di Torino in Turin, Italy.
Specific to this book:
- The first book to focus on the human rights impact assessment for AI, including social and ethical issues
- Discusses three different approaches to the regulation of AI (principle-based, risk-based and conformity-oriented)
- Investigates, among other topics, current and future European regulation and policy, the challenges posed by AI, and its global dimension
With a foreword by Prof. Joe Cannataci, former UN Special Rapporteur on the right to privacy
Excerpts from two book reviews:
In previous works, Alessandro [Mantelero] developed an ambitious and insightful model called PESIA (privacy, ethical and social impact assessment). It was a versatile framework to assess Big Data and AI. In BEYOND DATA, he evolves this framework into a comprehensive assessment that includes human rights, as well as ethics and social impact (aptly called HRESIA). It’s a powerful framework that can be operationalized to render evaluation and recommendations.
- Viktor Mayer-Schoenberger
The full review is available at: https://www.linkedin.com/feed/update/urn:li:activity:6944549815833817088/
Many aim to understand how to come to grips with AI in order to preserve the values to which we have subscribed for generations. There remain many dilemmas: how to comprehend AI's risks, harms and advantages; how to foresee trajectories of development and deployment and their effects, and how to tame the unruly beast whilst keeping it well fed. But how suitable is our familiar repertory of controls and incentives, our ways of knowing and predicting, and our calculation of the gains and losses to individuals, groups and categories of people, and to their rights and freedoms? How effective is the disjointed array of regulatory and governance instruments across the globe and at many disparate sites below that level, and how potent will it be in a future that can only dimly be seen, let alone regulated? This setting is the context into which Alessandro Mantelero’s thoughtful and constructive contribution fits. His academic experience as well as his practical advisory engagement in the policy process informs an exploration of these pressing issues of applying effective regulatory instruments to the tasks outlined above. Mantelero urges the development and adoption of Human Rights, Ethics and Social Impact Assessment (HRESIA) that goes ‘beyond data’ to comprise a holistic package for comprehending AI and shaping it in accordance with the principles and values inherent in the human rights that are enshrined in many lofty codifications as well as in everyday existence.
- Prof. Charles D. Raab
International Review of Law, Computers & Technology
The full review is available at: https://www.tandfonline.com/doi/full/10.1080/13600869.2023.2213104?src=
This is Volume 36 in the Information Technology and Law (IT&Law) Series