My new paper has been published in AI & Society.
Here’s the abstract:
The use of cognitive systems such as pattern recognition or video tracking technology in security applications is becoming ever more common. The paper considers cases in which cognitive systems are meant to assist human tasks by providing information, while the final decision is left to the human. All these systems and their various applications share a common feature: an intrinsic difference in how a situation or an event is assessed by a human being and by a cognitive system. This difference, here named “the model gap,” is analyzed with respect to its epistemic role and its ethical consequences. The main results are as follows: (1) The model gap is not a problem that might be solved by future research but the central feature of cognitive systems. (2) The model gap appears on two levels: which aspects of the world are evaluated, and how they are processed. This leads to changes in central concepts. While differences on the first level are often the very reason for deploying cognitive systems, differences on the second level are hard to notice and often go unexamined. (3) This lack of reflection is ethically problematic because the human is meant to give the final judgment. It is particularly problematic in security applications, where it might lead to a conflation of descriptive and normative concepts. (4) The idea that the human operator has the last word rests on an assumption of independent judgment. This assumption is flawed for two reasons: the cognitive system and the human operator form a “hybrid system” whose components cannot be assessed independently, and additional modes of judgment might pose new ethical problems.