This phenomenon is known as algorithm aversion and is often attributed to an inherent mistrust of machines. However, systematically overriding an algorithm does not necessarily stem from algorithm aversion. This new research shows that the very context in which a human decision maker works can also prevent them from learning whether a machine produces better decisions.
These findings come from research by Francis de Véricourt and Huseyin Gurkan, both professors of management science at ESMT Berlin. The researchers wanted to determine under which conditions a human decision maker supervising a machine that makes critical decisions can properly assess whether the machine produces better recommendations. To do so, they set up an analytical model in which a human decision maker supervised a machine tasked with important decisions, such as whether to perform a biopsy on a patient. For each task, the human decision maker then made the best choice they could based on the information received from the machine.
The researchers found that if the human decision maker heeded the machine’s recommendation and it proved correct, the human would trust the machine more. But the human sometimes did not observe whether the machine’s recommendation was correct – for instance, when the decision maker chose not to take follow-up actions. In that case, there was no change in trust and no lesson learned. This interaction between the human’s decisions and the human’s assessment of the machine creates biased learning: over time, the human may never learn how to use the machine effectively.
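To see how such one-sided feedback can stall learning, consider a minimal simulation sketch. This is not the researchers’ analytical model; the accuracy, trust threshold, and prior values are illustrative assumptions. Each simulated supervisor updates an estimate of the machine’s accuracy only when they follow the recommendation and observe the outcome, so an unlucky early streak can push trust below the point where they stop following up, after which no further feedback ever arrives.

```python
import random

# Minimal illustrative sketch (not the researchers' analytical model).
# All parameters below are assumed, hypothetical values:
MACHINE_ACCURACY = 0.8   # true probability that a recommendation is correct
TRUST_THRESHOLD = 0.6    # follow up only if estimated accuracy is at least this
N_TASKS = 1_000          # decisions faced by each simulated supervisor
N_SUPERVISORS = 10_000   # independent simulated supervisors

random.seed(0)

stuck = 0  # supervisors whose trust froze and who stopped observing outcomes
for _ in range(N_SUPERVISORS):
    correct_seen, total_seen = 2, 3  # optimistic prior: estimate starts near 0.67
    for _ in range(N_TASKS):
        if correct_seen / total_seen < TRUST_THRESHOLD:
            # Override without follow-up: the outcome is never observed,
            # so the estimate (and every future decision) stays frozen.
            stuck += 1
            break
        # Follow the recommendation: the outcome is observed and trust updates.
        correct_seen += random.random() < MACHINE_ACCURACY
        total_seen += 1

print(f"{stuck / N_SUPERVISORS:.1%} of simulated supervisors stopped learning, "
      f"even though the machine is right {MACHINE_ACCURACY:.0%} of the time")
```

Running the sketch prints the share of simulated supervisors whose trust froze below the threshold and who therefore never discovered that the machine was, in fact, reliable – the asymmetric learning the researchers describe.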
These findings show that humans do not always override algorithmic decisions out of an inherent mistrust of machines. Over time, this biased learning can be reinforced by consistent overriding, which may result in machines being used incorrectly and ineffectively in decision making.
“Often, we see a tendency for humans to override algorithms, which is typically attributed to an intrinsic mistrust of machine-based predictions,” says Prof. de Véricourt. “This bias, however, may not be the sole reason for inappropriately and systematically overriding an algorithm. It may also be the case that we simply do not learn how to use machines effectively when our learning is based solely on the correctness of the machine’s predictions.”
These findings show that trust in a machine’s decision-making ability is key to ensuring that we learn how to use it effectively and that, over time, the accuracy with which we use it improves.
“Our research shows that there is clearly a lack of opportunities for human decision makers to learn from a machine’s intelligence unless they account for its advice continually,” says Prof. Gurkan. “We need to adopt ways of learning with machines continuously, not just selectively.”
The researchers say that these findings shed light on the importance of collaboration between humans and machines and guide us on when (and when not) to trust machines. By studying such situations, we can learn when it is best to listen to the machine and when it is better to make our own decisions. The framework set out by the researchers can help humans better leverage machines in decision making.
The whole paper may be found here.