Who do I trust? The human or the machine?
What is the role of artificial intelligence (AI) as a teammate?
If an intensive care unit team delegates responsibility for regulating an AI-based respirator, then doctors and nursing staff are to some extent relinquishing control, says Nadine Bienefeld. "That is a recipe for vulnerability. And it also raises the question of how much people trust machines." As part of a preliminary project under NRP 77, Nadine Bienefeld and her team investigated this relationship of trust, conducting 70 interviews with carers, junior doctors and senior consultants, and observing 35 shifts totalling 256 hours.
The result: participants perceived AI in quite different ways. Some saw it as a hero that had everything under control; some as a personal threat that left them feeling at the machine's mercy; some as a manageable threat, along the lines of "I know what can go wrong, but I have a handle on things if it does"; and some, ultimately, as a partner and fully fledged member of the team.
According to Bienefeld, research has tended to neglect this last role. "To date, any interactivity has been overly interpreted as bilateral human-machine interaction," points out Bienefeld, "while it is in fact a human-human-machine triangle." She stresses that how the group as a whole reacts and interacts is very important. That is why it is not enough simply to train individuals to handle smart medical technology: training must be understood as a team process and configured accordingly.
This realisation is the basis of Bienefeld's NRP 77 project. The researchers observe medical teams engaged in simulation training for intensive care, in which the AI and simulated incidents also spring some surprises. The objective is to derive recommendations from these observations in order to improve the design of AI as a teammate, and thereby to develop strategies for strengthening trust within human-AI teams in general.