From tool to team member – how AI is changing teamwork in healthcare

How is AI changing work tasks, roles and teamwork? Using the healthcare sector as an example, one NRP 77 research project has shown how risks can be minimised, team processes optimised and job satisfaction increased.

Artificial intelligence (AI) is not only changing the way we work, but also the way we work together in teams. But how can AI be established as a team member without undermining human strengths or increasing workloads? A research team led by Nadine Bienefeld (ETH Zurich) conducted six studies involving over 1,100 people in the healthcare sector – nurses, doctors, data scientists, AI developers and medical students – to investigate the conditions under which AI can be meaningfully integrated as a team member. The study focussed on intensive care units in order to better understand the opportunities and risks of collaboration between humans and AI in a particularly complex environment.

The most important findings

The integration of AI not only influences human-machine collaboration, but also changes interactions among staff. For example, integrating knowledge through AI can promote the development of new hypotheses and the willingness to express concerns (‘speaking up’). In contrast, the exchange of information with human colleagues tended to result in fewer new ideas being developed, presumably due to social factors such as groupthink or existing hierarchies. AI can therefore act as a kind of social catalyst that disrupts established thought patterns.

Equally revealing was the stark difference in how AI developers and medical professionals understood ‘explainability’. While developers assumed that users wanted to understand how the model works (model interpretability), medical professionals actually only needed the AI results to be comprehensibly embedded in the patient's clinical context (clinical plausibility). Technical details were of little use to them in everyday clinical practice.

In addition, the project refuted the common narrative that clinicians prefer less automation for fear of losing control. The opposite is true: for routine tasks such as monitoring patient data, healthcare professionals favoured an even higher degree of automation than the data scientists did, since a semi-automated solution that requires constant human oversight is seen as inefficient and impractical.

Relevance for policy and practice

For the successful introduction of AI-supported automation and decision support systems in high-risk industries such as medicine, it is imperative to consider the impact on team dynamics, leadership and workplace organisation. Specifically, there is a need for new approaches to management and adapted decision-making guidelines in addition to investments in training and continuing education. Existing and future professionals need to be trained not only in the use of technology, but above all in a new form of ‘teaming intelligence’. This includes the ability to critically question AI-generated information, contextualise it using their own knowledge and actively help shape the new team dynamics. Only if managers and employees are specifically prepared for human-AI teams can this form of collaboration realise its potential added value.

Three main messages

  1. With good work design, AI can improve jobs: AI has the potential to increase job satisfaction, motivation and well-being among healthcare professionals and alleviate skills shortages. However, this is only possible if new tasks, roles and collaboration practices are designed based on the complementary strengths and weaknesses of humans and AI. Failure to consider optimal forms of human-AI teaming poses significant safety and performance risks. These include overreliance, complacency, loss of situational awareness and an inability to verify AI outputs.
  2. AI changes the way we communicate and solve problems in teams: Human-AI team collaboration can influence not only how team members interact with the AI but also with each other. The project’s results show that while accessing information from human team members was negatively associated with both developing new hypotheses and speaking up, integrating AI into the team’s knowledge base helped generate new ideas, critically evaluate information, and foster speaking up. The presence of an AI system can thus be used to disrupt negative team dynamics such as confirmation bias and groupthink, encouraging team members to consider alternative perspectives and voice their concerns more freely.
  3. Effective human-AI teaming requires new skills that go beyond technological aspects. Medical and nursing education programmes, as well as professional training, should focus on how to successfully ‘team up’ with AI, i.e. how to build trust, communicate and make joint decisions effectively. Leaders also need to learn how to manage these new forms of human-AI collaboration. Furthermore, preliminary findings indicate that using ChatGPT, for example, as a ‘learning coach’ may support medical students in the diagnostic decision-making process and improve the quality of clinical decision-making.

You can learn more about the researchers’ methodology and the background to the project on the NRP 77 project website:

Further NRP 77 research projects on the topic of “Digital Transformation” can be found here: