Socially acceptable and fair artificial intelligence

Artificial intelligence must be fair and equitable to be accepted by society. Researchers from ethics, computer science and management have jointly developed a novel approach that puts fairness-by-design into concrete practice.

  • Project description (completed)


    The use of artificial intelligence (AI), for example in decisions pertaining to personnel in companies, can lead to social injustice. The methodology developed in this interdisciplinary project makes it possible to understand, shape, and technically implement the fairness and social-justice dimensions of AI applications during their development. The methodology supports the discourse among stakeholders on how AI should be designed to be socially acceptable in specific applications, and it empowers software developers to implement normative requirements technically. It thus connects the ethical discourse on AI with the technological implementation of AI. In developing this methodology, a joined-up approach was taken, connecting philosophical, technical, and social science aspects of AI: What do fairness and social justice mean in a specific case? How can a concept of fairness, once agreed upon, be technically implemented? What needs to be considered for a socially balanced trade-off of interests?

  • Background


    AI-based decision systems are increasingly becoming part of our social reality. This raises the question of how these systems can be designed to be compatible with societal norms regarding fairness and justice. The concrete design of such systems requires a systematic and reflective approach that links up ethics, technology, and social decision-making processes.

  • Aim


    The central goal of the project was to enable developers of AI-based decision systems to design these AI applications in such a way that they meet expectations of fairness and justice sufficiently to be socially accepted. To this end, philosophical concepts of justice were closely integrated with the technical and social reality of automated decision systems.

  • Relevance


    The project has created a methodologically consistent framework for algorithmic fairness that did not exist before, linking ethics and technology. The design methodology developed on this basis allows a new and systematic approach to questions of social justice in AI-based decision systems. This is relevant both for further research in this field and for the concrete design of AI systems in practice. The project has also laid the foundation for training a new generation of AI developers who can deal with questions of fairness and justice better than before – initial training programmes have already been implemented.

  • Results


    Three main messages

    1. Managers responsible for deploying AI-based decision systems should minimise fairness violations in algorithmic decisions, as this is key to social acceptance. It is important to involve stakeholders in order to understand their preferred ideal of social justice, and which deviations from the ideal they accept.
    2. Developers of AI should be aware that fairness is complex – but that philosophy can help provide answers. Computer scientists need to gain this understanding; otherwise it is not possible to develop AI systems that implement adequate forms of fairness. Fairness should not be treated as a mere technical task: the technically available fairness metrics should be connected with their moral substance, and developers should choose among them on the basis of moral requirements rather than technical convenience. A fairness technique should only be implemented after analysing what justice requires in the specific context of its application – a moral, not a technical, question. Transforming such requirements into a concrete implementation then requires integrating concepts from mathematics, decision theory, philosophy, and the social sciences.
    3. Computer science educators need to develop new curricula for teaching algorithmic fairness in computer science programmes. Creating socially aligned AI systems cannot be achieved through current computer science teaching: it requires a simultaneous understanding of technology and of its implications for social justice, and understanding the latter requires a grasp of political philosophy. Such curricula should likewise avoid treating fairness as a mere technical task.
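
    The point that different fairness metrics carry different moral substance can be illustrated with a small sketch. The following example is not from the project; it uses two standard metrics from the algorithmic-fairness literature – demographic parity (equal overall selection rates across groups) and equal opportunity (equal selection rates among the qualified) – on invented toy hiring data. All names and numbers are illustrative assumptions.

    ```python
    # Toy hiring data, purely illustrative: (group, qualified, hired).
    records = [
        ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0),
        ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),
    ]

    def selection_rate(group):
        """Share of all people in the group who were hired."""
        rows = [r for r in records if r[0] == group]
        return sum(hired for _, _, hired in rows) / len(rows)

    def true_positive_rate(group):
        """Share of *qualified* people in the group who were hired."""
        rows = [r for r in records if r[0] == group and r[1] == 1]
        return sum(hired for _, _, hired in rows) / len(rows)

    # Demographic parity gap: difference in overall selection rates.
    dp_gap = selection_rate("A") - selection_rate("B")
    # Equal opportunity gap: difference in selection rates among the qualified.
    eo_gap = true_positive_rate("A") - true_positive_rate("B")

    # On this data the two metrics disagree: both groups are hired at the
    # same overall rate (dp_gap == 0), yet qualified members of group B are
    # hired less often than qualified members of group A (eo_gap > 0).
    print(f"demographic parity gap: {dp_gap:.3f}")
    print(f"equal opportunity gap:  {eo_gap:.3f}")
    ```

    Which of the two gaps matters is exactly the moral question the project highlights: demographic parity encodes an ideal of equal outcomes, equal opportunity an ideal of equal treatment of the equally qualified, and no technical argument alone can decide between them.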
  • Original title


    Socially acceptable AI and fairness trade-offs in predictive analytics