What does it mean to have human control over intelligent machines?
(Autonomous) artificial intelligence (AI) systems are widely expected to be subject to some form of human control. The project investigated whether this is feasible when humans and AI collaborate under time and resource pressure.
Project description (completed research project)
Lab experiments required human participants to land an object safely in collaboration with an AI. Various forms of collaboration were examined, for example, human intervention in an AI-driven landing process. The collaborative landings were monitored, and the operators were interviewed after completing their task. The results of these experiments informed conceptual and legal considerations concerning human-AI collaboration, as well as policy recommendations for the regulation of certain types of AI. A minimal, purely illustrative sketch of such an intervention protocol follows.
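The project materials do not specify the experimental software. Purely as an illustration of the kind of protocol described above, the following Python sketch simulates an AI-driven landing in which a human operator may override the controller at any step; all dynamics, parameters, and function names are invented for this example and are not the project's actual setup.

```python
import random

# Hypothetical 1D lander: state is (altitude in m, vertical velocity in m/s).
GRAVITY = -1.62          # lunar-like gravity, an illustrative choice
DT = 0.1                 # simulation step in seconds
SAFE_SPEED = 2.0         # touchdown below this speed counts as "safe"

def ai_thrust(altitude, velocity):
    """Stand-in AI policy: track a descent speed that shrinks with altitude."""
    target_velocity = -0.5 * max(altitude, 1.0) ** 0.5
    return max(0.0, min(5.0, 2.0 * (target_velocity - velocity) - GRAVITY))

def human_override(ai_cmd):
    """Hypothetical human intervention.

    In the real experiments the operator decided when to step in; here an
    occasional random, possibly ill-timed override illustrates how
    intervention can also worsen the outcome.
    """
    if random.random() < 0.1:                  # operator intervenes ~10% of steps
        return max(0.0, ai_cmd + random.uniform(-2.0, 2.0)), True
    return ai_cmd, False

def run_landing(seed=0):
    random.seed(seed)
    altitude, velocity, interventions = 100.0, 0.0, 0
    while altitude > 0.0:
        cmd = ai_thrust(altitude, velocity)
        cmd, intervened = human_override(cmd)
        interventions += intervened
        velocity += (GRAVITY + cmd) * DT
        altitude += velocity * DT
    return abs(velocity) <= SAFE_SPEED, velocity, interventions

safe, v, n = run_landing()
print(f"safe={safe}, touchdown speed={abs(v):.2f} m/s, interventions={n}")
```

Running many such simulated landings with and without the override would mimic the comparison made in the lab: whether, and when, human intervention improves or degrades the AI's landing performance.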
Background
Legal instruments increasingly include provisions requiring AI to be subject to some form of human control. A recent example is the EU's AI Act, which entered into force in August 2024. Its Article 14 requires certain high-risk AI systems to be subject to human oversight, particularly when they are used in the security domain. Other normative instruments, for example in international humanitarian law, point to a trend towards similar approaches. It is important to ground these mounting legal expectations in technical realities. Otherwise, human operators tasked with controlling AI might be scapegoated whenever there are human casualties.
Aim
The project aimed to test forms of control over AI by empirical means. Drawing on behavioural economics, applied practical philosophy, and law, it sought to go beyond the purely argumentative papers that dominate the field and to propose concrete, actionable forms of control – broadly defined – as well as the norms that should accompany them.
Relevance
Our results indicate that caution is warranted, both technically and legally, when subjecting an AI to human control or oversight. Humans can worsen outcomes, for example when they intervene unnecessarily or at the wrong moment. Humans could also become scapegoats, bearing legal responsibility without having had a fair chance to actually control the AI. This finding applies to situations in which the human operator is under time and resource pressure and the situation is thus “hot”. Whether “cooling down” the situation, for example by slowing down processes or increasing resources, can serve as a remedy remains to be investigated. Further research is also needed on whether human-AI collaboration is less problematic when there is no such pressure (“cold collaboration”).
Results
Three main messages
- Control and oversight duties for intelligent systems should be tailored to the specific context in which those systems are deployed. Positing a general duty to control or oversee such systems restates the problem rather than offering a genuine solution.
- Human intervention in AI-based processes is no panacea.
- The explainability of high-performance AI systems should not be taken for granted.
Original title
Meaningful Human Control of Security Systems – Aligning International Humanitarian Law with Human Psychology