Realizing the full potential of AI in the future of work will require graduating from an “automation” view to a “design” view of creating hybrid human-machine systems.
This article includes a note by Joe Ucuzoglu, CEO, Deloitte US.
“AI systems will need to be smart and to be good teammates.”—Barbara Grosz1
Artificial intelligence (AI) is one of the signature issues of our time, but also one of the most easily misinterpreted. The prominent computer scientist Andrew Ng’s slogan “AI is the new electricity”2 signals that AI is likely to be an economic blockbuster—a general-purpose technology3 with the potential to reshape business and societal landscapes alike. Ng states:
Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.4
Such provocative statements naturally prompt the question: How will AI technologies change the role of humans in the workplaces of the future?
An implicit assumption shaping many discussions of this topic might be called the “substitution” view: namely, that AI and other technologies will perform a continually expanding set of tasks better and more cheaply than humans, while humans will remain employed to perform those tasks at which machines cannot (yet) excel. This view comports with the economic goal of achieving scalable efficiency.
The seductiveness of this received wisdom was put into sharp relief by this account of a prevailing attitude at the 2019 World Economic Forum in Davos:
People are looking to achieve very big numbers … Earlier, they had incremental, 5 to 10 percent goals in reducing their workforce. Now they’re saying, “Why can’t we do it with 1 percent of the people we have?”5
But as the personal computing pioneer Alan Kay famously remarked, “A change in perspective is worth 80 IQ points.” This is especially true of discussions of the roles of humans and machines in the future of work. Making the most of human and machine capabilities will require moving beyond received wisdom about both the nature of work and the capabilities of real-world AI.
The zero-sum conception of jobs as fixed bundles of tasks, many of which will increasingly be performed by machines, limits one’s ability to reimagine jobs in ways that create new forms of value and meaning.6 And framing AI as a kind of technology that imitates human cognition makes it easy to be misled by exaggerated claims about the ability of machines to replace humans.
We believe that a change in perspective about AI’s role in work is long overdue. Human and machine capabilities are most productively harnessed by designing systems in which humans and machines function collaboratively in ways that complement each other’s strengths and counterbalance each other’s limitations. Following MIT’s Thomas Malone, a pioneer in the study of collective intelligence, we call such hybrid human-machine systems superminds.7
The change in perspective from AI as a human substitute to AI as an enabler of human-machine superminds has fundamental implications for how organizations can best harness AI technologies.
Compared with the economic logic of scalable efficiency, the superminds view may strike some as Pollyannaish wishful thinking. Yet it is anything but. Two complementary points—one scientific, one societal—are worth keeping in mind.
First, the superminds view is based on a contemporary, rather than decades-old, scientific understanding of the comparative strengths and limitations of human and machine intelligence. In contrast, much AI-related thought leadership in the business press has arguably been influenced by an understanding of AI rooted in the scientific zeitgeist of the 1950s and the subsequent decades of science-fiction movies that it inspired.9
Second, the post-COVID world is likely to see increasing calls for new social contracts and institutional arrangements of the sort articulated by the Business Roundtable in August 2019.10 In addition to being more scientifically grounded, a human-centered approach to AI in the future of work will better comport with the societal realities of the post-COVID world. A recent essay by New America chief executive Anne-Marie Slaughter conveys today’s moment of opportunity:
The coronavirus, and its economic and social fallout, is a time machine to the future. Changes that many of us predicted would happen over decades are instead taking place in the span of weeks. The future of work is here [and it’s] an opportunity to make the changes we knew we were going to have to make eventually.11
To start, let us ground the discussion in the relevant lessons of both computer and cognitive science.
The view that AI will eventually be able to replace people reflects the aspiration—explicitly articulated by the field’s founders in the 1950s—to implement human cognition in machine form.12 Since then, it has become common for major AI milestones to be framed as machine intelligence taking another step on a path to achieving full human intelligence. For example, the chess grandmaster Garry Kasparov’s defeat by IBM’s Deep Blue computer was popularly discussed as “the brain’s last stand.”13 In the midst of his defeat by IBM Watson, the Jeopardy quiz show champion Ken Jennings joked, “I for one welcome my new computer overlords.”14 More recently, a Financial Times profile of DeepMind CEO Demis Hassabis, published shortly after AlphaGo’s defeat of Go champion Lee Sedol, stated: “At DeepMind, engineers have created programs based on neural networks, modeled on the human brain … The intelligence is general, not specific. This AI ‘thinks’ like humans do.”15
But the truth is considerably more prosaic than this decades-old narrative suggests. It is indeed true that powerful machine learning techniques such as deep learning neural networks and reinforcement learning are inspired by brain and cognitive science. But it does not follow that the resulting AI technologies understand or think in humanlike ways.
So-called “second wave” AI applications essentially result from large-scale statistical inference on massive data sets. This makes them powerful—and often economically game-changing—tools for performing narrow tasks in sufficiently controlled environments. But such AIs possess no common sense, conceptual understanding, awareness of other minds, notions of cause and effect, or intuitive understanding of physics.
What’s more, and even more crucially, these AI applications are reliable and trustworthy only to the extent that they are trained on data that adequately represents the scenarios in which they are to be deployed. If the data is insufficient or the world has changed in relevant ways, the technology cannot necessarily be trusted. For example, a machine translation algorithm would need to be exposed to many human-translated examples of a new bit of slang to hopefully get it right.16 Similarly, a facial recognition algorithm trained only on images of light-skinned faces might fail to recognize dark-skinned individuals at all.17
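To make this dependence concrete, consider the toy sketch below (ours, and entirely hypothetical; the data, features, and numbers are invented for illustration). It trains a simple classifier on one data distribution, then scores it both on familiar data and on data from a world that has shifted in a relevant way. Accuracy holds up in the first case and degrades sharply in the second.

```python
# Hypothetical illustration: a model trained on historical data is only as
# trustworthy as that data's resemblance to the world it is deployed into.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two classes separated along the first feature; `shift` moves the
    # deployment-time data away from what the model saw during training.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    X[:, 0] += shift
    return X, y

X_train, y_train = make_data(5000)                 # historical training data
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(5000)                   # world unchanged
X_shifted, y_shifted = make_data(5000, shift=1.5)  # world changed in a relevant way

print("accuracy in the familiar scenario:", round(model.score(X_same, y_same), 2))
print("accuracy after the world shifts:  ", round(model.score(X_shifted, y_shifted), 2))
```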
In contrast, human intelligence is characterized by the ability to learn concepts from few examples, enabling people to function in unfamiliar or rapidly changing environments—essentially the opposite of brute-force pattern recognition learned from massive volumes of (human-)curated data. Think of the human ability to rapidly learn new slang words, work in physical environments that aren’t standardized, or navigate cars through unfamiliar surroundings. Even more telling is a toddler’s ability to learn language from a relative handful of examples.18 In each case, human intelligence succeeds where today’s “second wave” AI fails because it relies on concepts, hypothesis formation, and causal understanding rather than pattern-matching against massive historical data sets.
It is therefore best to view AI technologies as focused, narrow applications that do not possess the flexibility of human thought. Such technologies will increasingly yield economic efficiencies, business innovations, and improved lives. Yet the old idea that “general” AI would mimic human cognition has, in practice, given way to today’s multitude of practical, narrow AIs that operate very differently from the human mind. Their ability to generally replace human workers is far from clear.
A key theme that has emerged from decades of work in AI and cognitive science serves as a useful touchstone for evaluating the relative strengths and limitations of human and computer capabilities in various future of work scenarios. This theme is known as “the AI paradox.”19
It is hardly news that it is often comparatively easy to automate a multitude of tasks that humans find difficult, such as memorizing facts and recalling information, accurately and consistently weighing risk factors, rapidly performing repetitive tasks, proving theorems, performing statistical procedures, or playing chess and Go. What’s seemingly paradoxical is that the inverse also holds true: Things that come naturally to most people—using common sense, understanding context, navigating unfamiliar landscapes, manipulating objects in uncontrolled environments, picking up slang, understanding human sentiment and emotions—are often the hardest to implement in machines.
The renowned Berkeley cognitive scientist Alison Gopnik states, “It turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby.”20 The Harvard cognitive scientist Steven Pinker comments that the main lesson from decades of AI research is that “Difficult problems are easy, and the easy problems are difficult.”21
Far from being substitutes for each other, human and machine intelligence therefore turn out to be fundamentally complementary in nature. This basic observation turns the substitution view of AI on its head. In organizational psychology, what Scott Page calls a “diversity bonus” results from forming teams composed of different kinds of thinkers. Heterogeneous teams outperform homogenous ones at solving problems, making predictions, and innovating solutions.22 The heterogeneity of human and machine intelligences motivates the search for “diversity bonuses” resulting from well-designed teams of human and machine collaborators.
A “twist ending” to an AI breakthrough typically used to illustrate the substitution view—Deep Blue’s defeat of Garry Kasparov—vividly demonstrates the largely untapped potential of the human-machine superminds approach. After his defeat, Kasparov helped create a new game called “advanced chess” in which teams of humans using computer chess programs competed against other such teams. In 2005, a global advanced chess tournament called “freestyle chess” attracted grandmaster players using some of the most powerful computers of the time. The competition ended in an upset victory: Two amateur chess players using three ordinary laptops, each running a different chess program, beat their grandmaster opponents using supercomputers.
Writing in 2010, Kasparov commented that the winners’ “skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.” He went on to state what has come to be known as “Kasparov’s Law”:
Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process … Human strategic guidance combined with the tactical acuity of a computer was overwhelming.23
In Thomas Malone’s vernacular, the system of two human players and three computer chess programs formed a human-computer collective intelligence—a supermind—that proved more powerful than competing group intelligences boasting stronger human and machine components, but inferior supermind design.
Though widespread, such phenomena are often hidden in plain sight and obscured by the substitution view of AI. Nonetheless, the evidence is steadily gathering that smart technologies are most effective and trustworthy when deployed in the context of well-designed systems of human-machine collaboration.
We illustrate different modes of collaboration24—and the various types of superminds that result—through a sequence of case studies below.
Call center operators handle billions of customer requests per year—changing flights, refunding purchases, reviewing insurance claims, and so on. To handle the flood of queries, organizations commonly implement chatbots to handle simple queries and escalate more complex ones to human agents.25 A common refrain, echoing the substitution view, is that human call center operators remain employed to handle tasks beyond the capabilities of today’s chatbots, but that these jobs will increasingly go by the wayside as chatbots become more sophisticated.
While we do not hazard a prediction of what will happen, we believe that call centers offer an excellent example of the surplus value, as well as more intrinsically meaningful work, that can be enabled by the superminds approach. In this approach, chatbots and other AI tools function as assistants to humans who increasingly function as problem-solvers. Chatbots offer uniformity and speed while handling massive volumes of routine queries (“Is my flight on time?”) without getting sick, tired, or burned out. In contrast, humans possess the common sense, humor, empathy, and contextual awareness needed to handle lower volumes of less routine or more open-ended tasks at which machines flounder (“My flight was canceled and I’m desperate. What do I do now?”). In addition, algorithms can further assist human agents by summarizing previous interactions, suggesting potential solutions, or identifying otherwise hidden customer needs.
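As a purely illustrative sketch of this triage pattern (not any vendor’s actual system; the intents, cue words, and suggested actions below are invented), a bot can resolve routine, high-volume requests on its own and escalate open-ended or distressed ones to a human agent, passing along a machine-generated briefing so the agent starts ahead rather than from scratch.

```python
# Toy sketch of bot-handles-routine, human-handles-complex triage.
ROUTINE_ANSWERS = {
    "flight status": "Your flight is currently on time.",
    "baggage allowance": "You may check one bag free of charge.",
}
ESCALATION_CUES = ("canceled", "cancelled", "desperate", "complaint", "emergency")

def handle_query(query, recent_history):
    text = query.lower()
    if not any(cue in text for cue in ESCALATION_CUES):
        for intent, answer in ROUTINE_ANSWERS.items():
            if intent in text:
                return {"handled_by": "bot", "reply": answer}
    # Anything else goes to a person, with algorithmic assists bundled in.
    return {
        "handled_by": "human agent",
        "briefing": {
            "summary_of_history": recent_history[-3:],
            "suggested_next_steps": ["rebook", "offer voucher", "acknowledge and apologize"],
        },
    }

print(handle_query("What is my baggage allowance?", recent_history=[]))
print(handle_query("My flight was canceled and I'm desperate. What do I do now?",
                   recent_history=["booked flight 482", "asked about seat upgrade"]))
```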
This logic has recently been employed by a major health care provider to better deal with the COVID crisis. A chatbot presents patients with a sequence of questions from the US Centers for Disease Control and Prevention and in-house experts. The AI bot alleviates high volumes of hotline traffic, thereby enabling stretched health care workers to better focus on the most pressing cases.26
If this is done well, customers can benefit from more efficient, personalized service, while call center operators have the opportunity to perform less repetitive, more meaningful work involving problem-solving, engaging with the customer, and surfacing new opportunities. In contrast, relying excessively on virtual agents that are devoid of common sense, contextual awareness, genuine empathy, or the ability to handle unexpected situations (consider the massive number of unexpected situations created by the COVID crisis) poses the risk of alienating customers.
Even if one grants the desirability of this “superminds” scenario, however, will AI technologies not inevitably decrease the number of such human jobs? Perhaps surprisingly, this is not a foregone conclusion. To illustrate, recall what happened to the demand for bank tellers after the introduction of automated teller machines (ATMs). Intuitively, one might think that ATMs dramatically reduced the need for human tellers. But the demand for tellers in fact increased after the introduction of ATMs: The technology made it economical for banks to open numerous smaller branches, each staffed with human tellers operating in more high-value customer service, less transactional roles.27 Analogously, a recent Bloomberg report told of a company that hired more call center operators to handle the increased volume of complex customer queries after its sales went up thanks to the introduction of chatbots.28
A further point is that the introduction of new technologies can give rise to entirely new job categories. In the case of call centers, chatbot designers write and continually revise the scripts that the chatbots use to handle routine customer interactions.29
This is not to minimize the threat of technological unemployment in a field that employs millions of people. We point out only that using technology to automate simple tasks need not inevitably lead to unemployment. As the ATM story illustrates, characteristically human skills can become more valuable when the introduction of a technology increases the number of nonautomatable tasks.
Radiology is another field commonly assumed to be threatened by technological unemployment. Much of radiology involves interpreting medical images—a task at which deep learning algorithms excel. It is therefore natural to anticipate that much of the work currently done by radiologists will be displaced.30 In a 2017 tweet publicizing a recent paper, Andrew Ng asked, “Should radiologists be worried about their jobs? Breaking news: We can now diagnose pneumonia from chest X-rays better than radiologists.”31 A year earlier, the deep learning pioneer Geoffrey Hinton declared that it’s “quite obvious that we should stop training radiologists.”32
But further reflection reveals a “superminds” logic strikingly analogous to the scenario just discussed in the very different realm of call centers. In his recent book Deep Medicine, Eric Topol quotes a number of experts who discuss radiology algorithms as assistants to expert radiologists.33 The Penn Medicine radiology professor Nick Bryan predicts that “within 10 years, no medical imaging study will be reviewed by a radiologist until it has been pre-analyzed by a machine.” Writing with Michael Recht, Bryan states that:
We believe that machine learning and AI will enhance both the value and the professional satisfaction of radiologists by allowing us to spend more time performing functions that add value and influence patient care, and less time doing rote tasks that we neither enjoy nor perform as well as machines.34
The deep learning pioneer Yann LeCun articulates a consistent idea, stating that algorithms can automate simple cases and enable radiologists to avoid errors that arise from boredom, inattention, or fatigue. Unlike Ng and Hinton, LeCun does not anticipate a reduction in the demand for radiologists.35
Using AI to automate voluminous and error-prone tasks so that doctors can spend more time providing personalized, high-value care to patients is the central theme of Topol’s book. In the specific case of radiologists, Topol anticipates that these value-adding tasks will include explaining probabilistic outputs of algorithms both to patients and to other medical professionals. For Topol, the “renaissance radiologists” of the future will act less as technicians and more as “real doctors” (Topol’s phrase), and also serve as “master explainers” who display the solid grasp of data science and statistical thinking needed to effectively communicate risks and results to patients.
This value-adding scenario, closely analogous to the chatbot and ATM scenarios, involves the deployment of algorithms as physician assistants. But other human-machine arrangements are possible. A recent study combined human and algorithmic diagnoses using a “swarm” tool that mimics the collective intelligence of animals such as honeybees in a swarm. (Previous studies have suggested that honeybee swarms make decisions through a process similar to that of neurological brains.36) The investigators found that the hybrid human-machine system—which teamed 13 radiologists with two deep learning AI algorithms—outperformed both the radiologists and the AIs making diagnoses in isolation. Paraphrasing Kasparov’s law, humans + machines + a better process of working together (the swarm intelligence tool) outperforms the inferior process of either humans or machines working alone.37
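The swarming interface used in the study is considerably more sophisticated than anything shown here, but the underlying intuition (pooling independent human and machine estimates into a single group judgment) can be conveyed with a deliberately simple stand-in. All probabilities below are invented, and a weighted average is only a crude proxy for a real-time swarm.

```python
# Crude stand-in for a human-machine group judgment; not the swarm tool itself.
import statistics

radiologist_estimates = [0.70, 0.55, 0.80, 0.65, 0.60]  # invented P(pneumonia) readings
model_estimates = [0.62, 0.74]                          # invented outputs of two models

def pooled_judgment(human_probs, machine_probs, machine_weight=0.5):
    # Blend the median human view with the median machine view.
    human_view = statistics.median(human_probs)
    machine_view = statistics.median(machine_probs)
    return machine_weight * machine_view + (1 - machine_weight) * human_view

print(f"group estimate: {pooled_judgment(radiologist_estimates, model_estimates):.2f}")
```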
Using the mechanism of swarm intelligence to create a human-machine collective intelligence possesses the thought-provoking appeal of good science fiction. But more straightforward forms of human-machine partnerships for making better judgments and decisions have been around for decades—and will become increasingly important in the future. The AI pioneer and proto-behavioral economist Herbert Simon wrote that “decision-making is the heart of administration.”38 Understanding the future of work therefore requires understanding the future of decisions.
Algorithms are increasingly used to improve economically or societally weighty decisions in such domains as hiring, lending, insurance underwriting, jurisprudence, and public sector casework. Similar to the widespread suggestion of algorithms threatening to put radiologists out of work, the use of algorithms to improve expert decision-making is often framed as an application of machine learning to automate decisions.
In fact, the use of data to improve decisions has as much to do with human psychology and ethics as it does with statistics and computer science. Once again, it pays to remember the AI paradox and consider the relative strengths and weaknesses of human and machine intelligence.
The systematic shortcomings of human decision-making—and corresponding relative strengths in algorithmic prediction—have been much discussed in recent years thanks to the pioneering work of Simon’s behavioral science successors Daniel Kahneman and Amos Tversky. Two major sorts of errors plague human decisions: noise (unwanted variability in judgments across people and occasions) and bias (systematic deviation from accuracy).
Regarding noise, algorithms have a clear advantage. Unlike humans, algorithms can make limitless predictions or recommendations without getting tired or distracted by unrelated factors. Indeed, Kahneman—who is currently writing a book about noise—suggests that noise might be a more serious culprit than bias in causing decision traps, and views this as a major argument in favor of algorithmic decision-making.40
Bias is the more subtle issue. It is well known that training predictive algorithms on data sets that reflect human or societal biases can encode, and potentially amplify, those biases. For example, using historical data to build an algorithm to predict who should be made a job offer might well be biased against women or minorities if past hiring decisions reflected such biases.41 Analogously, an algorithm used to target health care “super-utilizers” in order to offer preventative concierge health services might be biased against minorities who have historically lacked access to health care.42
As a result, the topic of machine predictions and human decisions is often implicitly framed as a debate between AI boosters arguing for the superiority of algorithmic to human intelligence on the one side, and AI skeptics warning of “weapons of math destruction” on the other. Adopting a superminds rather than a substitution approach can help people move beyond such unproductive debates.
One of us (Jim Guszcza) has learned from firsthand experience how predictive algorithms can be productively used as inputs into, not replacements for, human decisions. Many years ago, Deloitte’s Data Science practice pioneered the application of predictive algorithms to help insurance underwriters better select business insurance risks (for example, workers’ compensation or commercial general liability insurance) and price the necessary contracts.
Crucially, the predictive algorithms were designed to meet the end-user underwriters halfway, and the underwriters were also properly trained so that they could meet the algorithms halfway. Black-box machine learning models were typically used only as interim data exploration tools or benchmarks for the more interpretable and easily documented linear models that were usually put into production. Furthermore, algorithmic outputs were complemented with natural language messages designed to explain to the end user “why” the algorithmic prediction was what it was for a specific case.43 These are all aspects of what might be called a “human-centered design” approach to AI.44
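The sketch below gives a flavor of such “why” messages. It is a simplified illustration rather than the production models referred to above, and the feature names and coefficients are assumptions made for the example.

```python
# Illustrative only: a transparent linear model whose per-feature contributions
# are translated into plain-language "why" messages for the underwriter.
import numpy as np

FEATURES = ["years_in_business", "prior_claim_count", "payroll_growth", "safety_program"]
coefficients = np.array([-0.15, 0.60, 0.20, -0.40])  # assumed values, fitted elsewhere
intercept = 0.10

def score_with_reasons(x, top_k=2):
    contributions = coefficients * x                   # each feature's push on the score
    score = intercept + contributions.sum()
    top = np.argsort(-np.abs(contributions))[:top_k]   # biggest drivers first
    reasons = [
        f"{FEATURES[i]} {'raises' if contributions[i] > 0 else 'lowers'} "
        f"the predicted risk by {abs(contributions[i]):.2f}"
        for i in top
    ]
    return score, reasons

risk_score, why = score_with_reasons(np.array([3.0, 2.0, 0.1, 1.0]))
print(f"predicted risk score: {risk_score:.2f}")
for message in why:
    print(" -", message)
```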
In addition, the end users were given clear training to help them understand when to trust a machine prediction, when to complement it with other information, and when to ignore it altogether. After all, an algorithm can only weigh the inputs presented to it. It cannot judge the accuracy or completeness of those inputs in any specific case. Nor can it use common sense to evaluate context and judge how, or if, the prediction should inform the ultimate decision.
Such considerations, often buried by discussions that emphasize big data and the latest machine learning methods, become all the more pressing in the wake of such world-altering events as the COVID crisis.45 In such times, human judgment is more important than ever to assess the adequacy of algorithms trained on historical data that might be unrepresentative of the future. Recall that, unlike humans, algorithms possess neither the common sense nor the conceptual understanding needed to handle unfamiliar environments, edge cases, ethical considerations, or changing situations.
Another point is ethical in nature. Most people simply would not want to see decisions in certain domains—such as hiring, university admissions, public sector caseworker decisions, or judicial decisions—meted out by machines incapable of judgment. Yet at the same time, electing not to use algorithms in such scenarios also has ethical implications. Unlike human decisions, machine predictions are consistent over time, and the statistical assumptions and ethical judgments made in algorithm design can be clearly documented. Machine predictions can therefore be systematically audited, debated, and improved in ways that human decisions cannot.46
Indeed, the distinguished behavioral economist Sendhil Mullainathan points out that the applications in which people worry most about algorithmic bias are also the very situations in which algorithms—if properly constructed, implemented, and audited—also have the greatest potential to reduce the effects of implicit human biases.47
The above account provides a way of understanding the increasingly popular “human-centered AI” tagline: Algorithms are designed not to replace people but rather to extend their capabilities. Just as eyeglasses help myopic eyes see better, algorithms can be designed to help biased and bounded human minds make better judgments and decisions. This is achieved through a blend of statistics and human-centered design. The goal is not merely to optimize an algorithm in a technical statistical sense, but rather to optimize (in a broader sense) a system of humans working with algorithms.48 In Malone’s vernacular, this is “supermind design thinking.”
New America’s Anne-Marie Slaughter comments:
Many of the jobs of the future should also be in caregiving, broadly defined to include not only the physical care of the very old and very young, but also education, coaching, mentoring, and advising. [The COVID] crisis is a reminder of just how indispensable these workers are.49
In a well-known essay about health coaches, the prominent medical researcher and author Atul Gawande provides an illuminating example of Slaughter’s point. Gawande describes the impact of a health coach (Jayshree) working with a patient (Vibha) with multiple serious comorbidities and a poor track record of improving her diet, exercise, and medical compliance behaviors:
“I didn’t think I would live this long,” Vibha said through [her husband] Bharat, who translated her Gujarati for me. “I didn’t want to live.” I asked her what had made her better. The couple credited exercise, dietary changes, medication adjustments, and strict monitoring of her diabetes. But surely she had been encouraged to do these things after her first two heart attacks. What made the difference this time? “Jayshree,” Vibha said, naming the health coach from Dunkin’ Donuts, who also speaks Gujarati. “Jayshree pushes her, and she listens to her only and not to me,” Bharat said. “Why do you listen to Jayshree?” I asked Vibha. “Because she talks like my mother,” she said.50
The skills of caregivers such as Jayshree are at the opposite end of the pay and education spectra from such fields as radiology. And the AI paradox suggests that such skills are unlikely to be implemented in machine form anytime soon.
Even so, AI can perhaps play a role in composing purely human superminds such as the one Gawande describes. In Gawande’s example, the value wasn’t created by generic “human” contact, but rather by the sympathetic engagement of a specific human—in this case, one with a similar language and cultural background. AI algorithms have long been used to match friends and romantic partners based on cultural and attitudinal similarities. Such matching could also be explored to improve the quality of various forms of caregiving in fields such as health care, education, customer service, insurance claim adjusting, personal finance, and public sector casework.51 This illustrates another facet of Malone’s superminds concept: Algorithms can serve not only as human collaborators, but also as human connectors.
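As a purely hypothetical sketch of the matching idea (the attributes, encoding, and candidate coaches below are invented, and a real system would require far richer data and careful safeguards), a simple similarity score could suggest which coach shares the most language and background attributes with a given patient.

```python
# Hypothetical illustration: match a patient to the most similar coach.
import numpy as np

def encode(traits, vocab):
    # One-hot encode a person's attributes against a shared vocabulary.
    return np.array([1.0 if t in traits else 0.0 for t in vocab])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

patient_traits = {"gujarati_speaker", "age_60s", "managing_diabetes"}
coaches = {
    "coach_a": {"spanish_speaker", "age_30s", "managing_diabetes"},
    "coach_b": {"gujarati_speaker", "age_40s", "managing_diabetes"},
}

vocab = sorted(set().union(patient_traits, *coaches.values()))
patient_vec = encode(patient_traits, vocab)

best_match = max(coaches, key=lambda name: cosine(patient_vec, encode(coaches[name], vocab)))
print("suggested coach:", best_match)
```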
As Niels Bohr and Yogi Berra each said, it is very hard to predict—especially about the future. This essay is not a series of predictions, but a call to action. Realizing the full benefits of AI technologies will require graduating from a narrow “substitution” focus on automating tasks to a broader “superminds” focus on designing and operationalizing systems of human-machine collaboration.
The superminds view has important implications for workers, business leaders, and societies. Workers and leaders alike must remember that jobs are not mere bundles of skills, nor are they static. They can and should be creatively reimagined to make the most of new technologies in ways that simultaneously create more value for customers and more meaningful work for people.52
To do this well, it is best to start with first principles. What is the ultimate goal of the job for which the smart technology is intended? Is the purpose of a call center to process calls or to help cultivate enthusiastic, high-lifetime-value customers? Is the purpose of a radiologist to flag problematic tumors, or to participate in the curing, counseling, and comforting of a patient? Is the purpose of a decision-maker to make predictions, or to make wise and informed judgments? Is the purpose of a store clerk to ring up purchases, or to enhance customers’ shopping experiences and help them make smart purchases? Once such ultimate goals have been articulated and possibly reframed, we can go about the business of redesigning jobs in ways that make the most of the new possibilities afforded by human-machine superminds.
An analogy from MIT labor economist David Autor conveys the economic logic of why characteristically human skills will remain valued in the highly computerized workplaces of the future. In 1986, the space shuttle Challenger blew up, killing its entire crew. The Challenger was a highly complex piece of machinery with many interlocking parts and dependencies, yet its demise was due to the failure of a single part—the O-ring. From an economist’s perspective, the marginal utility of making this single part more resilient would have been virtually infinite. Autor states that by analogy:
In much of the work that we do, we are the O-rings … As our tools improve, technology magnifies our leverage and increases the importance of our expertise, judgment, and creativity.53
In discussing the logic of human-machine superminds, we do not mean to suggest that achieving them will be easy. To the contrary, such forces as status quo bias, risk aversion, short-term economic incentives, and organizational friction will have to be overcome. Still, the need to overcome such challenges is common to many forms of innovation.
A further challenge relates to the AI paradox: Organizations must learn to better measure, manage, and reward the intangible skills that come naturally to humans but at which machines flounder. Examples include empathy for a call center operator or caregiver; scientific judgment for a data scientist; common sense and alertness for a taxi driver or factory worker; and so on. Such characteristically human—and often under-rewarded—skills will become more, not less, important in the highly computerized workplaces of the future.
By Joe Ucuzoglu, CEO, Deloitte US
The unique capabilities of humans matter now more than ever, even in the face of rapid technological progress. In the C-suite and boardrooms, a range of complex topics dominate the agenda: from understanding the practical implications of AI, cloud, and all things digital, to questions of purpose, inclusion, shareholder primacy versus stakeholder capitalism, trust in institutions, and rising populism—and now, the challenges of a global pandemic. In all of these areas, organizations must navigate an unprecedented pace of change while keeping human capabilities and values front and center.
We know from recent years of technological advancement that machines are typically far better than people at looking at huge data sets and making connections. But data is all about the past. What is being created here in the Fourth Industrial Revolution—and in the era of COVID-19—is a future for which past data can be an unreliable guide. Pablo Picasso once said, “Computers are useless. All they can do is provide us with the answers.” The key is seeing the right questions, the new questions—and that’s where humans excel.
What’s more, the importance of asking and answering innovative questions extends up and down entire organizations. It’s not just for C-suites and boardrooms, as Jim Guszcza and Jeff Schwartz share in their examples. It’s about effectively designing systems in which two kinds of intelligence, human and machine, work together in complementary balance, forming superminds.
Embracing the concept of superminds and looking holistically at systems of human-machine collaboration provides a way forward for executives. The question is, “What next?” The adjustments all of us have had to make in light of COVID-19 show that we are capable of fast, massive shifts when required and of innovating new ways of working with technology. Eventually, this pandemic will subside, but the currents of digital transformation that have been accelerated out of necessity over the past few months are likely to play out for the rest of our working lives.
How will your organization become a master of rapid experimentation and learning, of developing and rewarding essential human skills, and of aligning AI-augmented work with human potential and aspirations?