How AI is transforming the interaction between doctors and patients

The research project demonstrates how artificial intelligence is reshaping roles in the consultation room – and why doctors must support their patients not only medically but also through communication.

Artificial intelligence (AI) is increasingly being used in healthcare – for example, to support diagnosis or risk assessment. But what does it mean for patients and doctors when machines are involved in medical decision-making? The EXPLaiN project, led by Bernice Elger (University of Basel), conducted qualitative interviews with more than 60 experts and patients, analysed international guidelines, carried out legal assessments and developed ethical recommendations. The focus was on how the relationship between doctors and patients is changing – and what consequences this has.

The most important findings

The research shows that the use of AI is fundamentally changing the distribution of roles in the consultation room: the traditional two-person relationship between doctor and patient is being expanded to include AI as a third player. Many patients perceive these systems as complex and unavoidable, and therefore fear that their say in the treatment process will diminish and that doctors will turn to the AI instead of to them.

Doctors therefore carry a ‘narrative responsibility’ when dealing with AI. The way they talk about AI – as a promise of progress or as a risk – strongly shapes how patients perceive and judge the technology. Informing patients about AI as neutrally and with as little bias as possible thus makes doctors’ communication an ethical task in its own right.

Relevance for policy and practice

For policy and practice, there is a clear need for guidelines on when patients must be informed about the use of AI – especially when systems do not merely support decisions but make independent decisions or share data with third parties.

At the same time, the development of communication skills is key: doctors must be trained to explain the use of AI to patients in an understandable way.

There is also a need for action in data governance – for example, clear responsibilities and standards for the secure handling of health data when clinical AI is used and further developed.

Three main messages

  1. Some patients fear that the use of AI will limit their participation in medical decision-making. This finding highlights the importance of informing patients that their active participation in shared decision-making remains central even in AI-supported procedures. Doctors therefore take on a new responsibility: they must not only explain medical content but also be transparent about the use and role of AI in discussions with patients.
  2. Under current Swiss law, patient consent is not required for the use of AI – unless its use goes beyond supporting clinical decision-making.
  3. Sharing patient data with computer scientists can contribute to the development of more accurate tools to support clinical decision-making.

Further details on the methodology and background of the research project can be found on the NRP 77 project website:

Additional research projects on the topic of “Digital Transformation” within the framework of the National Research Programme NRP 77 can be found here: