Leadership development | July 21, 2021

To win with AI, focus on our humanity

Each year, more than 700,000 people around the world die from infections that antibiotics once cured but no longer do.

The bacteria have developed resistance, and the number of deaths is on track to hit ten million a year, making the tragedy of Covid-19 pale by comparison. Scientists had tried for decades to find a solution without much success. All this changed last year.

A team of scientists from MIT trained a machine-learning algorithm on more than 2,300 compounds with antimicrobial properties, then applied it to thousands of molecules to predict which might work. And one molecule stood out. They named it “halicin” (after HAL, the renegade computer in 2001: A Space Odyssey). Halicin was a surprising antibiotic, one that no human had imagined. Not only did it kill 35 superbugs, but it did so by preventing bacteria from storing energy, a mechanism never before seen in antibiotics.
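To make the division of labor concrete, here is a minimal sketch of that kind of virtual screen. Everything in it is illustrative – the fingerprint features, the random-forest model, and the synthetic data are stand-ins, not the MIT team’s actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: in a real study, each compound is encoded as a
# molecular feature vector; here we fake 2,300 training compounds
# with 128 binary fingerprint bits and an active/inactive label.
train_fingerprints = rng.integers(0, 2, size=(2300, 128))
train_is_active = rng.integers(0, 2, size=2300)

# Learn a mapping from molecular structure to antibacterial activity.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_fingerprints, train_is_active)

# Score thousands of unseen molecules and surface the top candidates --
# the kind of step that flagged halicin for testing in the lab.
library = rng.integers(0, 2, size=(6000, 128))
scores = model.predict_proba(library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:10]
print("Highest-scoring molecules:", top_candidates)
```

Even in this toy version, the frame is human-supplied: people decide what counts as “active,” how molecules are represented, and which library to search. The model only ranks candidates inside that frame.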

Technology disruptions are not novel; they have enabled humanity to evolve and progress. This one is different, though. For the first time in the history of our species, technology seems to be taking over the very domains where we thought humans excelled: innovation, strategic planning, and coordination. So troubling is the situation that some want to rely exclusively on data and algorithms to solve our myriad problems and make day-to-day decisions.

But this is wrong. We humans have a unique cognitive ability that no machine can match – we think in mental models and can frame or even reframe problems. Mental models are representations that make the world comprehensible through the lens of cause and effect. And this cognitive ability allows us to generalize and make abstractions that apply to other situations. Machines, however, notoriously struggle with causality and can’t generalize far beyond the data on which they are trained. That’s precisely why they require a lot of data to work, and why business executives must continue to provide the frame in which AI can perform.  

In our book “Framers: Human Advantage in an Age of Technology and Turmoil,” Kenneth Cukier (The Economist), Viktor Mayer-Schönberger (Oxford Internet Institute), and I look at the human minds behind the world’s AI successes. We assert that sensational headlines – like the Financial Times headline “AI Discovers Antibiotics to Treat Drug-Resistant Diseases” on the halicin case – miss the real story: these achievements are not victories of artificial intelligence but successes of human framing, the ability to rise to a critical challenge by conceiving of it in a certain way, altering aspects of it, and opening up new paths to a solution. Credit does not go to a new technology but to the unique human ability to think in mental models.

In 2019 OpenAI, a research organization in San Francisco, stunned the gaming world by building a five-bot system that crushed the best human players of Dota 2, a multiplayer online battle arena video game. On the surface, it seemed that the system could divine causation, generalize from experience, and, with those abstractions, apply causal frames to new circumstances. But a closer look reveals human mental models under the hood. Through the trial and error of plays repeated millions of times, the system identified the best sets of actions and gave itself a statistical “reward” to reinforce the behavior. Yet in the most critical areas, what constituted a reward wasn’t learned by the system itself: it had to be coded manually. The AI system performed well, but people had to peck at a keyboard to input the right causal frames for it to work.
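What “coded manually” looks like in practice is easy to show. Below is a toy, hand-written reward function of the kind reinforcement-learning systems are given; the event names and weights are hypothetical, not OpenAI Five’s actual reward design:

```python
# A hand-coded reward function: the human designer decides, in advance,
# which in-game events count as progress and how much each is worth.
# These event names and weights are invented for illustration.
REWARD_WEIGHTS = {
    "enemy_hero_killed": 1.0,
    "tower_destroyed": 2.0,
    "gold_earned": 0.01,
    "own_hero_died": -1.0,
}

def reward(events: dict) -> float:
    """Score one time step of play from a dict of observed event counts."""
    return sum(REWARD_WEIGHTS.get(name, 0.0) * count
               for name, count in events.items())

# The learning algorithm only maximizes this number through trial and
# error; the causal judgment that towers matter more than gold came
# from people typing in the weights above.
print(reward({"enemy_hero_killed": 2, "gold_earned": 350, "own_hero_died": 1}))
```

Swap the weights and the system will pursue a different game entirely – the frame lives in that dictionary, not in the learning loop.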

Leaders are increasingly turning to AI to help their organizations perform well on tasks that were previously considered the sole domain of humans. After all, AI seems to create novel solutions that have long eluded us. But this is too great a generalization. Behind AI there is always a unique human cognitive capacity – the ability to think with mental models. And without fostering this ability to frame problems, organizations will not properly leverage the power of AI.

So how can executives leverage the power of framing?

Consider what could be, not what is. Mental models let us imagine alternatives in a way AI can’t. This counterfactual thinking is an essential precursor for action, an element of our preparation to make decisions. Successful leaders often develop a culture where people ask what-if questions and are encouraged to envision what does not exist – to understand the world and conceive of how things might be different. These imaginings need not be meaningless daydreams; the right mental models help us adjust our imagination so that our counterfactuals remain actionable, showing us actions that are actually possible. With his SpaceX mission, Elon Musk is imagining alternative realities on Mars. But these counterfactuals are actionable: where earlier designs relied on wings for landing, SpaceX did away with them and pioneered reusable rockets.

Select the right frame. Learning which mental models to apply to which situations is another crucial element of becoming better framers. How we frame a problem determines what we see, and what we see determines how we act. Take, for example, the work of the World Health Organization and Doctors Without Borders (Médecins Sans Frontières, MSF, an international aid group) during the 2014 Ebola outbreak in West Africa. Both organizations were working on a response using the same data, but they framed the problem differently. As a result, WHO advised a limited response, while MSF predicted an epidemic of a magnitude never seen before. WHO won the argument, but the MSF prediction came true – the local outbreak turned into the deadliest Ebola epidemic in history. Same data, different frames, and opposite conclusions.

Expand your reach. We do not always have a mental model of our own to draw on for every situation. We sometimes need to tap into new reservoirs of approaches – what Charlie Munger, the business partner of famed investor Warren Buffett, refers to as a “latticework of mental models.” The business world doesn’t have a good track record of nurturing outsider perspectives. In Framers, we draw the reader back to the Greek myth of Cassandra, who was both gifted by Apollo with the power of prophecy and cursed to never have her visions believed. When Cassandra warned the city of Troy that it would fall, people considered her mad and paid no attention. The story does not end well – Troy is indeed sacked. But Ed Catmull, the cofounder and president of the animated-movie studio Pixar, interprets the myth differently. “Why, I always wonder, do we think of Cassandra as the one who’s cursed?” he asks. “The real curse, it seems to me, afflicts everyone else—all of those who are unable to perceive the truth she speaks.” If they want their organizations to develop the dexterity to entertain many different models, leaders need to create room for their “corporate Cassandras.”

Ultimately, an AI system cannot conceive of anything. It cannot concoct mental models or think in counterfactuals; it can neither generalize nor explain. For that, it relies on our ability to frame problems – to think causally, counterfactually, and within constraints. In fact, we can only leverage AI systems to their fullest potential by becoming better at what we’re already good at: being framers.

This article was originally published by Forbes on June 29, 2021, and republished with permission. 
