How artificial intelligence discriminates against people – and how it does not

Artificial intelligence is advancing around the world. But at the same time, concerns about the fairness of the algorithms that think in our stead are holding it back. Researchers are now addressing this dilemma.

Decisions made by computer algorithms are being used in more and more areas, from preselecting job applicants to supporting legal proceedings. Artificial intelligence is thus entering sensitive territory, where intelligent computer programs need not only to make factually correct decisions but also to act fairly and ethically. In practice this does not always work and can lead to problematic results: a recruiting algorithm at the online retailer Amazon systematically favoured male applicants, while a risk-assessment system used in US courts rated black defendants as higher risk than white defendants. So there is a real danger of AI reproducing existing inequalities and discrimination.
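To make the problem concrete, here is a minimal sketch – in Python and not taken from the project – of how such a skew can be detected. It computes a so-called demographic parity gap, i.e. the difference in selection rates between two groups; all names and numbers are hypothetical.

```python
def selection_rate(decisions, groups, group):
    """Share of applicants from `group` that the model accepted."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

# Hypothetical screening outcomes: 1 = invited to interview, 0 = rejected.
decisions = [1, 0, 1, 1, 0, 0, 0, 1]
groups    = ["m", "m", "m", "m", "f", "f", "f", "f"]

rate_m = selection_rate(decisions, groups, "m")  # 0.75
rate_f = selection_rate(decisions, groups, "f")  # 0.25

# A gap of 0 would mean equal treatment; the gap of 0.50 here
# signals the kind of skew reported in the Amazon case.
print(f"demographic parity gap: {rate_m - rate_f:.2f}")
```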

An interdisciplinary NRP 77 project by the Universities of Zurich and St. Gallen and ETH Zurich is investigating ways of preventing this. Project Manager Maël Schnegg, Assistant Professor at the University of St. Gallen, explains the researchers’ methods: “We have to program an algorithm in such a way that it still does what it’s supposed to – such as making a preselection – but simultaneously set certain conditions for the output.” The aim is to make artificial intelligence more human – taking into account not only factual aspects, but also ethical ones. The main question here is where exactly to incorporate this human element, and which factors should influence the program.
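What such “conditions for the output” might look like can be illustrated with a small, hypothetical sketch: a preselection that still ranks candidates by score, but is constrained to select the same number from each group. This shows only the general idea of constraining an algorithm’s output, not the researchers’ actual algorithm; all names, scores and the equal-quota rule are assumptions for illustration.

```python
def fair_preselect(candidates, k):
    """Pick k candidates by score, but take equally many from each group."""
    by_group = {}
    for cand in candidates:
        by_group.setdefault(cand["group"], []).append(cand)

    per_group = k // len(by_group)  # the fairness condition on the output
    selected = []
    for members in by_group.values():
        # Within each group, the algorithm still does its job:
        # it picks the highest-scoring candidates.
        members.sort(key=lambda c: c["score"], reverse=True)
        selected.extend(members[:per_group])
    return selected

candidates = [
    {"name": "A", "group": "m", "score": 0.91},
    {"name": "B", "group": "m", "score": 0.88},
    {"name": "C", "group": "f", "score": 0.85},
    {"name": "D", "group": "f", "score": 0.79},
    {"name": "E", "group": "m", "score": 0.75},
]
print(fair_preselect(candidates, 4))  # two candidates from each group
```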

To answer this question, the research project combines four subprojects from different disciplines. At the start of the project, Stefan Feuerriegel’s group at ETH Zurich developed an algorithm that can comply with fairness rules and made it available to the other research groups for their tests. The team headed by Gerhard Schwabe at the University of Zurich is examining how the interaction between humans and machines can be designed to increase trust in artificial intelligence. At the University of St. Gallen, Klaus Möller’s team is investigating which mechanisms companies would have to introduce to retain the necessary control over AI. Finally, Noëlle Vokinger at the University of Zurich and her team are looking into a possible legal framework for fair AI. Altogether, this should result in guidelines on how businesses and politicians can ensure the ethical use of AI.

“Artificial intelligence is amazing as long as it’s used properly,” says Maël Schnegg, pointing to the technology’s potential: if mechanisms are in place to ensure that the algorithms are fair – and to prevent ethical conflicts – this potential could be better utilised in future.