Publication date: 
Artificial intelligence (AI) now powers a range of systems, from recommending content on social media, to machine vision in self-driving cars, to deciding whether a bank should grant a loan. These examples demonstrate the profound impact AI algorithms have; their satisfactory functioning is therefore crucial for individuals and for society as a whole. The design of explainable and transparent AI algorithms will be addressed by the international AutoFair project, which has been awarded a grant of €3,840,846 (just under CZK 95 million) under the European Horizon Europe programme. Of the 206 applications in the HUMAN-01 call, 46 were successful.

The three-year AutoFair project was selected for funding as the only project coordinated by an institution from the Czech Republic. Its principal investigator is Dr. Jakub Mareček from the Centre for Artificial Intelligence at the Faculty of Electrical Engineering. CTU's share of the grant will amount to 15 million crowns; the remaining 80 million crowns will be shared by the other seven members of the consortium, which includes scientists from prestigious universities such as Imperial College London, the Technion – Israel Institute of Technology and the National and Kapodistrian University of Athens. The partner institutions will be complemented by technology companies that will provide the necessary data for modelling and verify the applicability of the results in practice. These will include multinational companies and local AI start-ups.

The AutoFair project aims to ensure that AI algorithms do not favour or disadvantage anyone. Artificial intelligence operating as a black box, whose decision-making we have no insight into, poses a significant risk: an algorithm may work satisfactorily for many people but very badly for some. One strategy for dealing with this risk is to work with the data: the data used to train a system must be representative and must not carry inequalities in society over into the algorithms built on it. The opposite strategy is to consistently explain the workings and limitations of AI systems to the public; this concerns the communication aspects after the actual deployment. The AutoFair project combines both of these approaches: it aims to improve the algorithms themselves while educating end-users. It therefore draws on insights from computer and data science, control theory, optimisation and other disciplines, including ethics and law.
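As an illustration of what auditing an algorithm's fairness can look like in practice, the sketch below compares positive-decision rates across groups (a demographic-parity check). This is a minimal, hypothetical example for the reader's intuition, not part of the AutoFair project's methodology; the groups, decisions and tolerance threshold are invented.

```python
# Minimal sketch of a demographic-parity audit (illustrative only;
# the groups, decisions and threshold below are hypothetical).

def positive_rate(decisions):
    """Share of positive (e.g. loan-approved) decisions, coded as 1."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit data: 1 = approved, 0 = denied.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
if gap > 0.1:  # hypothetical tolerance
    print("Warning: outcomes differ substantially across groups.")
```

A gap near zero would indicate that the two groups receive positive decisions at similar rates; a large gap is one signal (among several competing fairness criteria) that the system may treat groups unequally.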

Issues related to the ethics of artificial intelligence are commonly explored in computer vision. "Many people use facial recognition systems to unlock their mobile phones. For quite a long time, however, these worked reliably only for white men, and the success rate for ethnic minorities was much lower until recently. This problem, caused by unrepresentative data, has now been rectified. However, artificial intelligence has a number of other applications where similar ethical problems still persist," explains Dr Mareček of FEL CTU, the project's coordinator, underlining the urgency of the issue.

The project outputs will be tested in industrial case studies in three sectors: the automation of fair assessment in recruitment, the elimination of gender inequality in advertising, and financial technology, specifically the elimination of discrimination against bank customers. The creation of the three case studies will be accompanied by expert groups composed of representatives of business, public authorities, NGOs and politicians. The views of all interest groups will be central to the research process and will increase the potential for practical application of the project's results. The real-world implementation of scientific knowledge and the ethical use of AI are the main strengths of the AutoFair project. "I believe that the outputs of the project will also have an impact on the planned regulation of artificial intelligence, which is being prepared by the European Commission," adds Mareček.


Contact person: 
Karolína Poliaková
+420 734 111 409