Fair AI as a success factor: trust depends on perceived fairness
AI can support decision-making, but it will only gain acceptance if it's seen as fair. A research team outlines how companies can design fair AI systems to build trust and foster greater use of AI.
A research project led by Professor Klaus Möller from the University of St. Gallen has shown that artificial intelligence can support decision-making, yet many companies hesitate to use it. This reluctance is largely due to AI’s tendency to replicate data biases and produce discriminatory outcomes.
This is where the research team came in. They analysed the mechanisms that foster trust in AI and developed guidelines on how companies can create fair algorithmic systems.
The most important realisation
Research shows that perceived fairness is crucial for people to be willing to use AI in the first place. Fairness is a core component of trustworthiness, and without at least a minimum level of it, users are unlikely to adopt AI.
Increasing fairness will therefore increase the uptake of AI.
Significance for policy and practice
According to the research team, instead of restricting AI, policymakers should focus on promoting an ethical approach.
But how can companies and organisations implement fair AI? The researchers developed a range of tools and guidelines that help managers and administrators make informed decisions about using technologies for automated decision-making (links to the relevant documents can be found further down in the text).
Head of Research Klaus Möller recommends "defining boundaries and goals before experimenting with the implementation of AI."

However, the fairness of AI relies on more than technology. The human aspect is just as important, because implementing AI in companies is doomed to failure if interest groups (e.g. employees) feel their integrity or privacy is threatened.
Three main messages
- Integrating fairness into AI improves its perception among stakeholders.
- Technical measures (fair AI) and social measures (governance, regulation and target transparency) are necessary to improve trust in AI.
- Stakeholders perceive fairness differently depending on whether a human or an AI is making the decisions.
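The second message above distinguishes technical measures (fair AI) from social ones. As a minimal sketch of what one such technical check might look like, the following computes the demographic parity difference, i.e. the gap in positive-decision rates between two groups. The function, data, and group labels are illustrative assumptions for this article, not part of the project's own tooling.

```python
# Illustrative sketch: demographic parity difference between two groups.
# All decisions and group labels below are made-up example data.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical automated decisions (1 = approved) for groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A gap near zero would indicate that the system approves both groups at similar rates; a large gap is one signal that the data biases mentioned above may be carrying over into discriminatory outcomes.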
You can find more information on the project, including how researchers carried out their investigation, on the NRP 77 project website: Governance and legal framework for artificial intelligence management
Further research projects on "Digital Transformation" as part of the National Research Programme NRP 77 are available under Projects.