How can artificial intelligence be made fair and equitable?

Artificial intelligence must not only be at the cutting edge of technology, but also fair and non-discriminatory. A new method makes it possible to design AI applications in a socially acceptable way.
Artificial intelligence is increasingly being used to make decisions that have a profound impact on people's everyday lives, for example when filling vacant positions. This can lead to social injustice if the AI systems are not designed for fairness and equity. A research team led by Christoph Heitz (Zurich University of Applied Sciences ZHAW) has now developed the first methodology that combines philosophical concepts of justice with technical implementation strategies.
The most important findings
Fairness in AI applications is not the result of a single standard procedure; it must always be defined for a specific context. This is the research project's central message. Achieving it requires involving the affected stakeholders, because fairness can only be implemented appropriately in AI algorithms if their differing ideas of justice are taken into account.
Another prerequisite for the implementation of fairness is that developers of AI applications understand the philosophical and moral dimensions so that they can make ethically sound technical decisions. Fairness therefore needs to be integrated as an interdisciplinary topic in computer science curricula – and not just as a purely technical issue.
Significance for policy and practice
The project has created the first methodological framework for algorithmic fairness that takes into account both ethical and technical aspects: a systematic procedure that combines ethics, philosophy and computer science to implement fairness-by-design in AI-supported decision-making systems. This means that questions of fairness and equity are not only examined and, if necessary, corrected retrospectively, but are systematically built into the development of AI systems from the outset. On the one hand, the methodology developed allows a new approach to questions of social justice in AI-based decision-making systems. On the other hand, it also provides the basis for training a new generation of AI developers. The first training programmes, for example new modules on Responsible AI at the ZHAW, have already been implemented.
The project has also produced the FairnessLab, a software tool that guides users step by step through the ethical-technical analysis. This web-based visualisation and assessment tool shows, among other things, the consequences of design choices in an AI application. It gives companies a practical instrument for developing fair algorithms: they can use it to analyse their own data sets, incorporate moral assessments and visualise the effects of design changes. For authorities, the project provides practical knowledge on how requirements for fair AI – for example under the EU's “AI Act” – can be implemented.
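To illustrate the kind of analysis such a tool supports, the following minimal Python sketch (not the FairnessLab itself; all data, field names and thresholds are invented for this example) evaluates a hiring rule under three candidate decision thresholds and reports two common fairness measures, the demographic-parity gap and the equal-opportunity gap, between two groups:

```python
# Illustrative sketch only: the FairnessLab is a web-based tool and its
# internals are not reproduced here. The dataset fields and the candidate
# thresholds below are hypothetical assumptions made for this example.

def selection_rate(records, group):
    """Share of applicants in `group` that the decision rule accepts."""
    members = [r for r in records if r["group"] == group]
    return sum(r["accepted"] for r in members) / len(members)

def true_positive_rate(records, group):
    """Among actually qualified members of `group`, the share accepted."""
    qualified = [r for r in records if r["group"] == group and r["qualified"]]
    return sum(r["accepted"] for r in qualified) / len(qualified)

def decide(records, threshold):
    """One design choice: accept everyone whose score reaches `threshold`."""
    return [dict(r, accepted=r["score"] >= threshold) for r in records]

# Hypothetical applicant data: `score` is the model output, `qualified` the
# ground truth (unknown at decision time), `group` a protected attribute.
applicants = [
    {"group": "A", "score": 0.9, "qualified": True},
    {"group": "A", "score": 0.7, "qualified": True},
    {"group": "A", "score": 0.4, "qualified": False},
    {"group": "B", "score": 0.8, "qualified": True},
    {"group": "B", "score": 0.5, "qualified": False},
    {"group": "B", "score": 0.3, "qualified": True},
]

for threshold in (0.45, 0.6, 0.75):  # three candidate designs
    decided = decide(applicants, threshold)
    dp_gap = abs(selection_rate(decided, "A") - selection_rate(decided, "B"))
    eo_gap = abs(true_positive_rate(decided, "A")
                 - true_positive_rate(decided, "B"))
    print(f"threshold={threshold}: demographic-parity gap={dp_gap:.2f}, "
          f"equal-opportunity gap={eo_gap:.2f}")
```

Even in this toy example the two measures do not move together: a threshold that closes the selection-rate gap can still leave qualified applicants in one group accepted at a lower rate. Making such consequences of design changes visible is precisely what the tool is for.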
Three main messages
Managers responsible for deploying AI-based decision systems should minimise fairness violations in algorithmic decisions, as this is key to social acceptance. It is important to involve stakeholders in order to understand which ideal of social justice they prefer and which deviations from that ideal they are willing to accept.
Developers of AI should be aware that fairness is complex – but that philosophy can help provide answers. Computer scientists need to gain this understanding; otherwise it is not possible to develop AI systems that implement adequate forms of fairness. Fairness should not be treated as a mere technical task: the technically available fairness criteria must be connected with their moral substance, and developers should choose among them on the basis of moral requirements rather than technical convenience. A fairness technique should only be implemented after analysing what justice requires in the specific context of the application – a moral, not a technical, question (see the sketch after these messages). Turning such requirements into a concrete implementation then requires integrating concepts from mathematics, decision theory, philosophy and the social sciences.
Computer science educators need to develop new curricula for teaching algorithmic fairness. Creating socially aligned AI systems cannot be achieved through current computer science teaching: it requires a simultaneous understanding of technology and of its implications for social justice, and understanding the latter requires a grasp of political philosophy. Educators should also make sure that these curricula do not treat fairness as a mere technical task.
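To make the second message concrete, the following small sketch (with invented base rates) illustrates a well-known result from the fairness literature: once two groups differ in their underlying rate of qualification, even a perfectly accurate decision rule cannot satisfy demographic parity and equalized odds at the same time, so the choice between such criteria must be made on moral, context-specific grounds.

```python
# Sketch of why choosing a fairness criterion is a moral, not a technical,
# question. The base rates below are invented; the argument is general.

# Suppose groups A and B differ in their base rate of qualification
# (e.g. because of unequal access to prior opportunities).
base_rate = {"A": 0.6, "B": 0.3}

# A perfectly accurate decision rule accepts exactly the qualified
# applicants: its true-positive rate is 1.0 and its false-positive rate
# is 0.0 in BOTH groups, so it satisfies equalized odds...
tpr = {g: 1.0 for g in base_rate}
fpr = {g: 0.0 for g in base_rate}

# ...but its selection rate in each group then equals that group's base
# rate, so demographic parity is violated:
selection_rate = {g: tpr[g] * base_rate[g] + fpr[g] * (1 - base_rate[g])
                  for g in base_rate}
print(selection_rate)  # {'A': 0.6, 'B': 0.3}

# Conversely, forcing equal selection rates (demographic parity) while
# base rates differ necessarily introduces unequal error rates between
# the groups. Which criterion should yield is exactly the context-dependent
# moral question the project says must be settled before implementation.
```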
Further details on the methodology and background of the research project can be found on the NRP 77 project website:
Additional research projects on the topic of “Digital Transformation” within the framework of the National Research Programme NRP 77 can be found here:
