Publication date: 2023/06/28
Artificial intelligence (AI) poses a number of risks, while offering many opportunities across society. However, there has been no comprehensive analysis of AI in the context of human rights. Starting in 2021, joint research by FEL CTU, MUNI, Ambis and prg.ai aims to do just that.

The project "Artificial Intelligence and Human Rights: risks, opportunities and regulation" is funded by the Czech Technology Agency and brings together technology and legal experts. The researchers set out to identify the root causes of the risk of human rights violations across all phases of the AI lifecycle. And they didn't stop at just evaluation. The outcome of the project will be a set of recommendations on how to develop, use and regulate AI technologies in a way that does not threaten human rights, but instead helps their development and protection. At the same time, they are also exploring ways to further deploy AI where automation could help protect human rights.

Artificial intelligence as a good servant but a bad master

"AI can significantly help, for example, in monitoring compliance with and enforcing human rights in the area of corporate accountability, strengthening the right to a fair trial by eliminating delays in proceedings, protecting health and many others," says Martina Šmuclerová, project coordinator at Ambis, who also works at Sciences Po in Paris.

But she also highlights the risks. "For example, unrepresentative input data and unbalanced processing can introduce so-called algorithmic bias, or discriminatory bias, into AI technology. This can manifest itself, for example, in job selection procedures, in police surveillance based on facial recognition, or in the assessment of individuals' creditworthiness in banking. Even the deployment of an AI system in a non-target operating environment, as with autonomous vehicles, can introduce the risk of human rights violations."

The researchers also see the inexplicability and opacity of AI models as a problem. If a person alleging a human rights violation does not have access to the information and evidence on the basis of which the AI made its decision, the right to a fair trial may be compromised.

Legal regulation of risky AI systems
Legal regulation of AI-based systems is currently being prepared at the European Union level in the form of the Artificial Intelligence Act. It was introduced in April 2021 as part of a wider EU initiative to strengthen the regulatory framework for new technologies. The European Commission's proposal, which passed the European Parliament's vote on 14 June 2023, focuses on the registration and regulation of high-risk systems that may pose a threat to the fundamental rights and security of EU citizens.

"The risk of deploying automation systems lies in the quality of their processing and the task they solve. This is not changing with the advent of AI. However, the areas of human activities and data processing that can be automated are expanding significantly," explains Luboš Král from the Centre for Artificial Intelligence at FEL CTU.

Although the obligation to respect human rights already arises from international legal norms, research to date has revealed that more than 60 percent of the Czech companies surveyed that supply AI solutions and products do not address human rights issues in the context of AI at all and are not aware of these risks (see Partial Research Report). Moreover, the rapid development of technologies and their deployment in various areas brings completely new challenges, the practical solutions to which are still being developed.

"What we are lacking now is education. This is the goal of our project," says Lukáš Kačena from the prg.ai initiative, which has long been dedicated to building awareness of artificial intelligence. "The potential impact on the user is also significant, and it is only a matter of time before the first complaints and lawsuits for human rights violations appear in the Czech Republic, as is already happening elsewhere in Europe and the world, so prevention is necessary," Mr. Kačena adds, describing the seriousness of the situation.

Recommendations for companies and public administration
So what changes are the scientists proposing specifically? The aim of the project is not to create new legal norms, but to provide practical guidance to all actors in the AI lifecycle to translate human rights norms already in force and binding in our society into automated systems. The key is to establish a risk assessment mechanism that will be at the core of two sets of recommendations for commercial actors and for government. The researchers have identified 38 potential human rights risks at all stages of AI system development and operation, and for each they offer options for prevention and elimination.

At the same time, they propose deploying different types of AI technologies with adequate functionality in 16 selected major areas of human rights violations in society. The researchers will also recommend a package of potential deployment areas and possible AI applications that can contribute to further strengthening and protecting human rights. This will also involve defining the institutional and competency framework, including a map of access to remedies.

"The sets of recommendations will help the practical implementation of legally binding standards, making our project complementary to the ongoing EU initiative. The AI Act as a general regulation does not address the specific technical implementation of how human rights risks are to be prevented in a particular deployment of AI technology. Soft law tools such as codes of conduct, various forms of certification, etc. will play a key role. And it is this level that the results of our research will serve," explains Jakub Míšek from the Institute of Law and Technology at Masaryk University.

The project's interim findings, set in a scholarly framework, are presented in the book Artificial Intelligence and Human Rights, to be published by the prestigious Oxford University Press in early September. A second workshop with key stakeholders in the field of AI and human rights will also take place in autumn 2023; it is open to companies and government organisations that wish to participate in finalising the forthcoming set of recommendations.

Web: https://prg.ai/projekty/ai-lidska-prava/

Partial research report: https://prg.ai/wp-content/uploads/2023/02/AI-LP_ResearchReport2022.pdf

Contact person: Radovan Suk
E-mail: sukradov@fel.cvut.cz