Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine making high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a set-up in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM’s supervision. Because stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine’s prescriptions across tasks, she updates her belief about the machine. However, the DM is subject to a so-called verification bias: she verifies the machine’s correctness, and updates her belief accordingly, only if she ultimately decides to act on the task. In this set-up, we characterize the evolution of the DM’s belief and overruling decisions over time. We identify situations in which the DM hesitates forever about whether the machine is better, i.e., she never fully ignores it but regularly overrules it. Moreover, with positive probability the DM may end up wrongly believing that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. These findings provide a novel explanation for human-machine complementarity and suggest guidelines on the decision to fully adopt or reject a machine.
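The belief dynamics described above can be sketched with a toy simulation. This is a simplification, not the paper's model: all parameter values, the binary good/bad machine types, and the function name are illustrative assumptions. The key mechanism is the verification bias: the DM observes whether the machine was right, and so updates her belief, only on tasks where she ultimately acts.

```python
import random

def simulate_verification_bias(true_acc=0.9, acc_good=0.9, acc_bad=0.6,
                               own_acc=0.75, prior=0.5,
                               n_tasks=200, seed=0):
    """Toy sketch (illustrative, hypothetical parameters): the machine is
    either 'good' (accuracy acc_good) or 'bad' (acc_bad); its true accuracy
    is true_acc. Each period the DM follows whichever source she currently
    deems more accurate, but -- verification bias -- she observes the
    machine's correctness, and updates her belief that it is good, only
    when her final decision is to act."""
    rng = random.Random(seed)
    belief = prior                                   # P(machine is good)
    for _ in range(n_tasks):
        act_is_right = rng.random() < 0.5            # ground truth for this task
        machine_rec = act_is_right if rng.random() < true_acc else not act_is_right
        own_rec = act_is_right if rng.random() < own_acc else not act_is_right
        # Overrule the machine when its expected accuracy looks too low.
        expected_machine_acc = belief * acc_good + (1 - belief) * acc_bad
        decision = machine_rec if expected_machine_acc > own_acc else own_rec
        if decision:                                 # she acts, so she verifies
            machine_correct = (machine_rec == act_is_right)
            l_good = acc_good if machine_correct else 1 - acc_good
            l_bad = acc_bad if machine_correct else 1 - acc_bad
            belief = belief * l_good / (belief * l_good + (1 - belief) * l_bad)
    return belief
```

Because no update occurs on tasks where the DM decides not to act, paths of this simulation can keep the belief hovering around the overruling threshold rather than converging, which is the flavor of the persistent hesitation characterized in the paper.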
The rapid adoption of AI technologies by many organizations has recently raised concerns that AI may eventually replace humans in certain tasks. In fact, when used in collaboration, machines can significantly enhance the complementary strengths of humans. Indeed, because of their immense computing power, machines can perform specific tasks with incredible accuracy. In contrast, human decision-makers (DMs) are flexible and adaptive but constrained by their limited cognitive capacity. This paper investigates how machine-based predictions may affect the decision process and outcomes of a human DM. We study the impact of these predictions on decision accuracy, the propensity and nature of decision errors, as well as the DM’s cognitive efforts. To account for both flexibility and limited cognitive capacity, we model the human decision-making process in a rational inattention framework. In this setup, the machine provides the DM with accurate but sometimes incomplete information at no cognitive cost. We fully characterize the impact of machine input on the human decision process in this framework. We show that machine input always improves the overall accuracy of human decisions, but may nonetheless increase the propensity of certain types of errors (such as false positives). The machine can also induce the human to exert more cognitive effort, even though its input is highly accurate. Interestingly, this happens when the DM is most cognitively constrained, for instance, because of time pressure or multitasking. Synthesizing these results, we pinpoint the decision environments in which human-machine collaboration is likely to be most beneficial. Our main insights hold for different information and reward structures, and when the DM mistrusts the machine.
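The tension between higher overall accuracy and more false positives can be illustrated with a small Bayesian enumeration. This is not the paper's rational-inattention model (attention is not endogenized here); the function name, priors, and accuracies are hypothetical. The point it shows: a free machine signal that is accurate (0.95) but incomplete (available only 60% of the time) can raise overall accuracy while also creating false positives that the unaided DM never made.

```python
from itertools import product

def error_rates(use_machine, prior=0.3, own_acc=0.65,
                mach_acc=0.95, mach_avail=0.6):
    """Exact enumeration of a two-state decision problem (illustrative,
    hypothetical numbers): the DM declares 'positive' when her posterior
    odds exceed 1, combining her own noisy signal with a free machine
    signal that is accurate (mach_acc) but incomplete -- it arrives only
    with probability mach_avail. Returns (accuracy, false-positive mass)."""
    acc = fp = 0.0
    for truth in (True, False):
        for own_sig, mach_present, mach_sig in product((True, False), repeat=3):
            if not mach_present and mach_sig:
                continue  # no machine signal to enumerate when it is absent
            # Probability of this joint realization.
            p = prior if truth else 1 - prior
            p *= own_acc if own_sig == truth else 1 - own_acc
            p *= mach_avail if mach_present else 1 - mach_avail
            if mach_present:
                p *= mach_acc if mach_sig == truth else 1 - mach_acc
            # Posterior odds of 'positive' given the signals the DM uses.
            odds = prior / (1 - prior)
            odds *= own_acc / (1 - own_acc) if own_sig else (1 - own_acc) / own_acc
            if use_machine and mach_present:
                odds *= mach_acc / (1 - mach_acc) if mach_sig else (1 - mach_acc) / mach_acc
            declare_pos = odds > 1.0
            if declare_pos == truth:
                acc += p
            if declare_pos and not truth:
                fp += p
    return acc, fp
```

With these numbers the unaided DM's own signal is never strong enough to overcome the low prior, so she never declares positive (zero false positives, accuracy 0.70); adding the machine signal lifts accuracy to 0.85 but introduces a strictly positive false-positive rate, mirroring the abstract's claim.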