Subject(s)
Information technology and systems; Management sciences, decision sciences and quantitative methods; Technology, R&D management
Keyword(s)
machine accuracy, decision making, human-in-the-loop, algorithm aversion, dynamic learning
Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet, recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine to make high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a set-up in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM’s supervision. Because stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine’s prescriptions across tasks, she updates her belief about the machine. However, the DM is subject to a so-called verification bias: she verifies the machine’s correctness, and updates her belief accordingly, only if she ultimately decides to act on the task. In this set-up, we characterize the evolution of the DM’s belief and overruling decisions over time. We identify situations in which the DM hesitates forever about whether the machine is better, i.e., she never fully ignores the machine but regularly overrules it. Moreover, with positive probability the DM ends up wrongly believing that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. These findings provide a novel explanation for human-machine complementarity and suggest guidelines on the decision to fully adopt or reject a machine.
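To make the verification-bias dynamic concrete, the following minimal Python sketch simulates a DM who Bayesian-updates her belief that the machine is accurate only on tasks where she ultimately acts. It is an illustration only, not the paper's model; every parameter (accuracy levels, thresholds, the DM's own assessment) is an assumption.

```python
import random

def bayes_update(belief, machine_correct, p_good=0.9, p_bad=0.6):
    """Posterior that the machine is 'good' after one verified prescription."""
    like_good = p_good if machine_correct else 1 - p_good
    like_bad = p_bad if machine_correct else 1 - p_bad
    num = belief * like_good
    return num / (num + (1 - belief) * like_bad)

def simulate(true_accuracy=0.9, prior=0.5, n_tasks=500, seed=1):
    """The DM verifies the machine (and updates) only on tasks where she acts."""
    random.seed(seed)
    belief = prior
    for _ in range(n_tasks):
        machine_says_act = random.random() < 0.5   # machine's prescription
        dm_own_view_act = random.random() < 0.5    # DM's independent assessment
        # The DM acts if she trusts an "act" prescription or her own view says act.
        acts = (machine_says_act and belief > 0.5) or dm_own_view_act
        if acts:
            # Acting reveals the ground truth, so the prescription can be verified.
            machine_correct = random.random() < true_accuracy
            belief = bayes_update(belief, machine_correct)
        # If she does not act, the machine's correctness is never observed.
    return belief

print(f"final belief that the machine is good: {simulate():.2f}")
```

Because prescriptions on tasks where the DM declines to act are never verified, updating is censored; this censoring is the mechanism behind the learning failures characterized in the paper.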
View all ESMT Working Papers in the ESMT Working Paper Series here. ESMT Working Papers are also available via SSRN, RePEc, EconStor, and the German National Library (DNB).
Pages
54
ISSN (Print)
1866-3494
Subject(s)
Management sciences, decision sciences and quantitative methods; Product and operations management; Technology, R&D management
Keyword(s)
Data, machine learning, data product, pricing, incentives, contracting
This paper explores how firms that lack expertise in machine learning (ML) can leverage the so-called AI Flywheel effect. This effect designates a virtuous cycle by which, as an ML product is adopted and new user data are fed back to the algorithm, the product improves, enabling further adoptions. However, managing this feedback loop is difficult, especially when the algorithm is contracted out. Indeed, the additional data that the AI Flywheel effect generates may change the provider's incentives to improve the algorithm over time. We formalize this problem in a simple two-period moral hazard framework that captures the main dynamics among ML, data acquisition, pricing, and contracting. We find that the firm's decisions crucially depend on how the amount of data on which the machine is trained interacts with the provider's effort. If this effort has a more (less) significant impact on accuracy for larger volumes of data, the firm underprices (overprices) the product. Interestingly, these distortions sometimes improve social welfare, which accounts for the customer surplus and profits of both the firm and provider. Further, the interaction between incentive issues and the positive externalities of the AI Flywheel effect has important implications for the firm's data collection strategy. In particular, the firm can boost its profit by increasing the product's capacity to acquire usage data only up to a certain level. If the product collects too much data per user, the firm's profit may actually decrease, i.e., more data is not necessarily better.
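As a purely illustrative complement, the toy Python sketch below captures only the flywheel feedback between period-1 pricing, usage-data collection, and period-2 accuracy; it abstracts entirely from the provider's effort and the contracting problem, and every functional form and number is an assumption rather than the paper's model.

```python
# Toy two-period flywheel: period-1 sales generate usage data, data raises
# period-2 accuracy, and higher accuracy raises period-2 willingness to pay.

def two_period_profit(p1, a1=0.6, kappa=1.0, improvement=0.3):
    q1 = max(0.0, 1.0 - p1 / a1)       # period-1 demand; willingness to pay scales with accuracy
    data = kappa * q1                  # usage data fed back to the algorithm
    a2 = a1 + improvement * data       # flywheel: more data -> higher period-2 accuracy
    p2 = a2 / 2                        # period-2 monopoly price for demand 1 - p / a2
    q2 = 1.0 - p2 / a2
    return p1 * q1 + p2 * q2

grid = [i / 100 for i in range(61)]    # candidate period-1 prices
best_p1 = max(grid, key=two_period_profit)
myopic_p1 = 0.6 / 2                    # period-1 monopoly price ignoring the feedback loop
print(f"myopic price: {myopic_p1:.2f}, flywheel-aware price: {best_p1:.2f}")
```

In this parameterization the flywheel-aware price lies below the myopic monopoly price, i.e., the firm gives up some current revenue to collect data that raises future accuracy; the paper analyzes how contracting with the provider can push such pricing distortions in either direction.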
Copyright © 2022, INFORMS
Volume
68
Journal Pages
8791–8808
ISSN (Online)
1526-5501
ISSN (Print)
0025-1909
Subject(s)
Economics, politics and business environment; Strategy and general management; Technology, R&D management
Keyword(s)
Pharmaceuticals, patent, Markush
Markush structures are molecular skeletons containing not only specific atoms but also placeholders to represent broad sets of chemical (sub)structures. As genus claims, they allow a vast number of compounds to be claimed in a patent application without having to specify every single chemical entity. While Markush structures raise important questions regarding the functioning of the patent system, innovation researchers have been surprisingly silent on the topic. This paper summarizes the ongoing policy debate about Markush structures and provides first empirical insights into how Markush structures are used in patent documents in the pharmaceutical industry and how they affect important outcomes in the patent prosecution process. While they do not cause frictions in the patent prosecution process, patent documents containing Markush structures are more likely to restrict the patentability of follow-on inventions and to facilitate the construction of broad patent fences.
With permission of Elsevier
Volume
51
Journal Pages
104597
Subject(s)
Economics, politics and business environment
Secondary Title
Thinking like an economist: How efficiency replaced equality in U.S. public policy
Pages
3
Journal Pages
1509–1511
Subject(s)
Economics, politics and business environment
Keyword(s)
cybersecurity, governance, Brexit, EU-UK relations, European Union, United Kingdom
The book chapter analyzes the EU-UK Trade and Cooperation Agreement's (TCA) Chapter on future thematic cooperation on cybersecurity. It explains the broader political, technological and regulatory context of cybersecurity cooperation at the international and the EU levels. It then analyzes the TCA's passages individually and within this broader context. Finally, it provides an evaluation and outlook on future EU-UK cooperation on cybersecurity.
Secondary Title
Handels- und Zusammenarbeitsabkommen EU/VK
ISBN
978-3-8487-7188-2
Subject(s)
Management sciences, decision sciences and quantitative methods; Product and operations management; Technology, R&D management
Keyword(s)
machine-learning, rational inattention, human-machine collaboration, cognitive effort
The rapid adoption of AI technologies by many organizations has recently raised concerns that AI may eventually replace humans in certain tasks. In fact, when used in collaboration, machines can significantly enhance the complementary strengths of humans. Indeed, because of their immense computing power, machines can perform specific tasks with incredible accuracy. In contrast, human decision-makers (DMs) are flexible and adaptive but constrained by their limited cognitive capacity. This paper investigates how machine-based predictions may affect the decision process and outcomes of a human DM. We study the impact of these predictions on decision accuracy, the propensity and nature of decision errors, as well as the DM’s cognitive efforts. To account for both flexibility and limited cognitive capacity, we model the human decision-making process in a rational inattention framework. In this setup, the machine provides the DM with accurate but sometimes incomplete information at no cognitive cost. We fully characterize the impact of machine input on the human decision process in this framework. We show that machine input always improves the overall accuracy of human decisions but may nonetheless increase the propensity of certain types of errors (such as false positives). The machine can also induce the human to exert more cognitive effort, even though its input is highly accurate. Interestingly, this happens when the DM is most cognitively constrained, for instance, because of time pressure or multitasking. Synthesizing these results, we pinpoint the decision environments in which human-machine collaboration is likely to be most beneficial. Our main insights hold for different information and reward structures, and when the DM mistrusts the machine.
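For intuition only, here is a minimal rational-inattention sketch in Python: the DM chooses the precision of a costly signal about a binary state and pays a fixed cost per bit of mutual information, while machine input is modeled crudely as a sharper prior obtained at zero cognitive cost. All parameters and functional forms are assumptions for illustration, not the paper's model.

```python
from math import log2

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def net_value(prior, precision, lam):
    """Expected decision accuracy minus attention cost; returns (value, bits spent)."""
    # Joint probabilities of (state, signal) for a symmetric binary signal.
    p11 = prior * precision
    p10 = prior * (1 - precision)
    p01 = (1 - prior) * (1 - precision)
    p00 = (1 - prior) * precision
    accuracy = max(p11, p01) + max(p10, p00)   # DM picks the more likely state per signal
    ps1, ps0 = p11 + p01, p10 + p00
    post1 = p11 / ps1 if ps1 else prior        # P(state 1 | signal 1)
    post0 = p10 / ps0 if ps0 else prior        # P(state 1 | signal 0)
    bits = entropy(prior) - (ps1 * entropy(post1) + ps0 * entropy(post0))
    return accuracy - lam * bits, bits

def optimal_effort(prior, lam=0.3):
    grid = [0.5 + i / 200 for i in range(101)]             # precision from 0.5 to 1.0
    best = max(grid, key=lambda r: net_value(prior, r, lam)[0])
    return best, net_value(prior, best, lam)[1]

for label, prior in [("no machine input  ", 0.5), ("with machine input", 0.85)]:
    precision, bits = optimal_effort(prior)
    print(f"{label}: chosen precision {precision:.2f}, attention spent {bits:.2f} bits")
```

With these particular numbers the free machine input leads the DM to spend less attention; the paper characterizes when the opposite occurs, for instance when the DM is severely cognitively constrained.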
View all ESMT Working Papers in the ESMT Working Paper Series here. ESMT Working Papers are also available via SSRN, RePEc, EconStor, and the German National Library (DNB).
Pages
56
ISSN (Print)
1866-3494
Subject(s)
Human resources management/organizational behavior
Keyword(s)
error, error management, failure, psychological safety, organizational learning
ISSN (Print)
0015-6914
Subject(s)
Economics, politics and business environment
Keyword(s)
trade platform, hybrid business model, steering, regulation
JEL Code(s)
D42, L12, L13, L40, H25
We illustrate conditions under which a trade platform selling its own products alongside third-party sellers benefits or harms consumers. Platform entry with its own products benefits consumers by lowering prices in a suite of models: a gatekeeper platform facing a competitive fringe of sellers; a gatekeeper platform whose fringe sellers also have their own channels, perfectly or imperfectly substitutable to the platform; a gatekeeper platform with fringe sellers competing against a big seller with market power on a differentiated alternative channel; and a gatekeeper platform hosting only a big seller with market power. Platform product entry might harm consumers when a big firm sells both on the platform and on its alternative channel. The platform selling its own products harms consumers when consumers have heterogeneous tastes for variants of products and the platform can control the access of fringe sellers via its commission and own product price. We also review the recent literature to highlight other channels via which benefits and harms arise from the platform selling its own products in its marketplace.
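To illustrate one of the listed scenarios numerically, here is a stylized Python sketch (not the paper's model) of a gatekeeper platform hosting a single seller with market power: with linear demand and zero costs, platform entry with a perfect-substitute product removes the double margin and lowers the consumer price. All functional forms and numbers are assumptions.

```python
def seller_price(commission):
    # The seller maximizes (p - commission) * (1 - p), giving p = (1 + commission) / 2.
    return (1 + commission) / 2

def platform_commission_revenue(commission):
    return commission * (1 - seller_price(commission))

grid = [i / 1000 for i in range(1001)]

# Without platform entry: the platform only earns commissions from the big seller.
tau = max(grid, key=platform_commission_revenue)
price_without_entry = seller_price(tau)

# With platform entry: the platform sells the product itself at the integrated
# monopoly price, undercutting the double margin.
price_with_entry = max(grid, key=lambda p: p * (1 - p))

print(f"consumer price without platform entry: {price_without_entry:.2f}")
print(f"consumer price with platform entry:    {price_with_entry:.2f}")
```

In this toy the consumer price falls from 0.75 to 0.50 once the platform enters; the harm results in the paper arise in richer variants with heterogeneous tastes, alternative channels, and commission-controlled access.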
With permission of Elsevier
Volume
84
Journal Pages
102861
Subject(s)
Human resources management/organizational behavior
Keyword(s)
human resources management/organizational behavior
JEL Code(s)
M51
Volume
September–October 2022
Subject(s)
Human resources management/organizational behavior
Keyword(s)
corporate culture, remote work, leadership
ISSN (Print)
0015-6914