Why hasn’t AI delivered on its promise?
Cover image by: Jim Slatton
AI exploded out of research some ten years ago, promising to deliver all manner of science fiction solutions: autonomous cars, perfect prediction machines1 for business, even the singularity (where machine intelligence accelerates past human intelligence).2 Pundits predicted systemic disruption as AI eliminated the need for humans in many fields of endeavor. Some even went so far as to posit that we should stop training professions such as radiologists,3 as AI would soon be so superior that human radiologists would find it impossible to compete.
Despite all this promise, adoption of AI is not where many expected (or hoped) it might be. Research continues to improve the underlying AI technology—the recent development of both Midjourney and Stable Diffusion4 is a case in point—and firms continue to invest in AI.5 We even saw a bump in investment during the first couple of years of the pandemic.6 However, a majority of AI projects fail.7 Compelling demonstrations are not transitioning into value-creating solutions. Autonomous cars are a prime example: commercial, mass-market versions seem perpetually a decade away, despite early success and significant investment. We hear a similar story from AI practitioners working in firms attempting to leverage AI, with carefully developed models and solutions left on the bench as they are either not compelling enough or too fragile to replace existing solutions. There are notable successes, such as machine translation, but there appear to have been more misses.
It’s possible that broad adoption of AI is being held back by the usual adoption concerns, such as a lack of the technical implementation skills required, resistance to organizational change, and potential job displacement. Or it might be that deploying AI solutions requires significantly more work than anticipated, and it will take some time to work through the many issues involved. We might even ask if AI is not the general-purpose technology8 many assumed it to be, as successful point solutions have not evolved into general business platforms (the way computers, once used in offices for discrete, complex calculations, eventually became part of the business infrastructure9). AI might be a solution to particular types of problems, but we’ve yet to develop the ability to identify the types of problems it is best applied to.10
Before we write off AI, though, we might ask if the disappointment is due to the technology failing to develop quickly enough11 (or as quickly as we’d like), or if some other factor is at play. There has been some speculation that the shortfall in AI realization is due to an inability to translate theory into practice—something akin to the technology commercialization chasm that plagues academic research. This is an unsatisfying explanation, though, as there is no indication of how AI differs from other technology domains, which don’t suffer similar problems.12
Another possibility is that the emergence of AI was due to the development of complementary technologies—innovations that AI relies on—rather than developments in the core AI technologies themselves. For example, Deep Blue, the solution that triggered the computational chess revolution, could be attributed to a rapid drop in the cost of computing13 along with the Markov decision process14 coming to the attention of AI researchers, rather than to a key development in the core AI techniques used.
Many (if not all) recent AI developments are at least equally due to the development or commoditization of other technologies as to the development of the core technology itself. The surprise emergence of machine translation in the mid-1990s was more likely the result of dropping computation costs and access to digital, multilingual texts than of a significant improvement in the underlying theory.15 Similarly, autonomous cars went from research oddity16 to potentially transformative technology due to the development of portable LiDAR17 sensors and powerful portable computers18 that were both lightweight and miserly in their electricity requirements.
Technology comes in packages, as any practical solution relies on a range of complementary technologies as well as the key innovation.19 Consider how the first telegraphs (used to dispatch trains) were not much more than a battery, switch, two conductors, electromagnet, striker, and bell—a package of six complementary technologies.20 New technologies rely on packages of earlier technologies, creating dependencies from our current technologies back to their earlier, simpler building blocks. Over time, these packages of technologies have grown in size. The modern equivalent of the telegraph—sending an emoji via text message—depends on a huge package of technologies that includes mobile computing and digital radios along with global telecommunications networks.21 The complex digital environment we live in means any modern solution will have many dependencies.
In the past, solutions that depended on large packages of technologies were at a disadvantage. It was quite possible, for example, to create impressive AI-powered legal analysis solutions in the 80s and 90s, though these solutions were hamstrung by the need to manually enter information from paper documents before the solution could do its work. The lack of easy access to digital data meant that these solutions were left on the shelf, uneconomic. Today, on the other hand, the same data is most likely digital, likely stored in a cloud-based document management system that can be accessed via an API, and a solution that was technologically possible but not economically viable now makes sense, as it can leverage the many technologies in our technology-rich environment.
Our approach to realizing AI’s promise in the enterprise is predicated on there being some remarkable new development in AI. Assuming that the core AI technology is racing ahead, firms have attempted to seize the opportunity by investing in developing internal capabilities and then targeting these capabilities at problems that seem suitable. Machine learning (ML), for example, is seen as a machine for perfect predictions (given sufficient data), so the challenge is to develop expertise in ML and then find predictions to make perfect.
This approach, after a few early wins, appears to be running out of steam.22 A marketing team, for example, might use machine learning to develop a new customer segmentation model. At first, the solution provides novel (and valuable) insights, but repeated use provides little benefit. Bringing a fresh pair of (machine) eyes to the problem was valuable, but the enduring benefits that were expected from “constantly optimizing the model” never eventuated. A similar dynamic can be seen in other AI research efforts. It’s straightforward, for example, to hire a team of AI graduates, build a prototype autonomous car, and provide compelling demonstrations of the car’s capabilities. It’s very challenging, though, to move past that first prototype. A general autonomous car, one that can operate in the same diverse range of environments that humans can, remains beyond our reach.23
What if, rather than focusing on particular AI technologies, we tease apart the packages of technologies that resulted in those technically possible solutions? Developments in the core AI technology likely have contributed to the solution’s success, but what if the success was also due to developments in complementary technologies?24 Our earlier legal example is a case in point. The barrier to adoption of the first legal solutions was the lack of easy access to suitable digital texts, a barrier that was overcome by a combination of the automation of legal business processes (making the documents digital) and the shift to cloud computing (making the digital documents easily accessible online). While the core AI technologies were improved in the intervening years, it was developments in the complementary technologies that transformed the solutions from technically possible to economically viable.
There are two consequences.
First, other AI techniques that haven't yet reached the limelight might also have transitioned from the technically possible to economically viable as they can leverage similar complements. An organization that has used robotic process automation (RPA) to provide digital access to all the steps in a business process, for example, might swap their business process engine25 for an AI-enabled real-time planning engine to dynamically manage business processes.26 This has the effect of eliminating (static) business processes (as processes are constructed dynamically by the planning engine), along with business exceptions (as each process is constructed to meet the particular needs of a single transaction).
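The endnotes describe a planning engine as taking an initial state, a set of actions, and a goal, then computing the sequence of actions that reaches the goal. The idea can be sketched in a few lines of Python as a breadth-first, STRIPS-style search. The facts and action names below (order handling in a hypothetical back office) are invented for illustration; they are not from any particular product or the article itself.

```python
from collections import deque

# Each action: (preconditions it needs, facts it adds, facts it removes).
# These hypothetical actions stand in for the steps of a business process.
ACTIONS = {
    "verify_identity": ({"order_received"}, {"identity_verified"}, set()),
    "check_credit":    ({"identity_verified"}, {"credit_approved"}, set()),
    "ship_order":      ({"credit_approved"}, {"order_shipped"}, set()),
}

def plan(initial, goal):
    """Breadth-first search for the shortest action sequence reaching the goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # goal facts all hold in this state
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:       # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                    # no sequence of actions reaches the goal

print(plan({"order_received"}, {"order_shipped"}))
# → ['verify_identity', 'check_credit', 'ship_order']
```

Because each plan is computed per transaction, there is no static process to maintain: change the goal or the available actions and the "process" changes with them, which is what eliminates both static processes and business exceptions.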
Second, and more interesting: rather than looking for particular (novel) AI technologies and then trying to find potential problems that the technology might address, we can look across the enterprise to find challenges and opportunities where changes to the complements enable new approaches, and then pull in a range of technologies (new and established) to address these challenges and opportunities.
Our natural recency bias27 leads us to focus on the novel and new. However, the history of big ideas tends to be a story of the drive to find a better way, coupled with the incremental development of otherwise banal technologies. The development of the global multi-modal container network is a great example28—its genesis was in 1956, when Malcom McLean (1913–2001) launched an all-container shipping line built on the realization that up to half of shipping costs were due to the manual handling required to move cargo between transport modes (from boat to truck to train to truck and so on).29 All-container shipping streamlined these mode changes to realize dramatic savings, ushering in the next phase of globalization.
Big disruptive innovations tend to be the result of the incremental development of existing ideas which, at some point, come together in a way that amplifies their effect. The earlier legal example is one such instance, as is the story of machine translation. If we construct a figure that captures the technology novelty and the impact of the solutions, such as machine translation and the global multi-modal container network, then we find ourselves with the two-by-two shown in figure 1.
We tend to look for opportunity under the streetlight of technical novelty—in the lower-right quadrant of our two-by-two—trying to find those big new technologies that will change the world. This is often a fruitless search. A more productive approach might be to acknowledge that a significant factor in AI’s recent successes has been the development of our digital infrastructure. Rather than looking for opportunities to apply particular AI technologies that have been successful in the past, look for problems in our business environment where the operating context has changed significantly in the past five to ten years. These might well represent problems whose time has come, as we can now bring a range of digital tools to create innovative new solutions to old problems.
Consider call centers: In 2014, the then CEO of Telstra, Australia’s largest mobile phone company, made headlines with a forecast that within five years there would be no people in its call centers.30 AI-powered automation was forecast to replace human call center workers with digital ones. Since then, call centers have deployed a range of AI technologies to augment humans and streamline operations. Interactive voice response (IVR) systems have been replaced with AI chatbots that use natural language processing to resolve simple queries while directing other calls to humans. Those same chatbots can answer voice calls thanks to improved voice recognition. AI-powered mood analysis can even alert human workers to attend to anxious or distressed callers. But despite these many improvements, the promised transformation has yet to come to fruition. Call centers continue to rely on human workers.
What if AI’s lack of progress is due to a lack of imagination? We might move beyond automating existing business practices to optimize or reinvent them, to integrate human and machine work in new ways. For example, it took 30 years from the initial electrification of factories for production engineers and managers to realize that electric power distribution could be used inside the factory, and consequently reorganize the factory floor to optimize workflow (rather than mechanical power distribution) leading to a 30% improvement in total productivity with the same machines, staff, and floor space.31 Similarly, AI enables us to rethink the call center. Rather than automate existing work practices, we can invent new ones, optimizing the call center or even redefining the role that call centers play in an organization, and the role human workers play in the call center.
To optimize a call center, given the current technologies and infrastructure we have at our disposal, we need to approach the work done there in a new way.32 Rather than chipping away at automating (some) worker tasks, we can blend the activities of humans and AI workers, envisioning work as a set of complementary behaviors and expertise (both AI and human) focused on framing and addressing a problem. Let’s call this behavior-based work.33
Behavior-based work can be conceptualized as a team standing around a shared whiteboard, each holding a marker, responding to new stimuli (text and other marks) appearing on the board, carrying out their action, and drawing their result on the same board. Some team members are human, while others are represented by AI behaviors. The whiteboard is a shared context34 against which both human and digital behaviors can operate. Contrast this with the task-based work of a traditional organization,35 which is more like a bucket brigade where the workers stand in a line and the “work” is passed from worker to worker on its way to a predetermined destination, with each worker carrying out their action as the work passes by.
The starting point in our contact center example might be a transcript of the conversation so far, created via a speech-to-text behavior. A collection of AI “recognize-client” behaviors monitor the conversation to determine if the caller is a returning client. This might be via voiceprint or speech-pattern recognition, or the client could state their name clearly enough for the AI to understand. They may have even provided a case number or be calling from a known phone number. Or the human worker might step in if they recognize the caller before the AI does. Regardless, the client’s details are fetched from case management to populate our shared context, the shared digital whiteboard, with minimal intervention.
As the conversation unfolds, AI behaviors use natural language processing to identify key facts in the dialogue. A caller mentions a dependent child, for example. These facts are highlighted for both the human and other digital behaviors to see, creating a summary of the conversation—the case being investigated—updated in real time. The worker can choose to accept, modify, or cancel the highlighted facts. Regardless, the human’s focus is on the conversation, and they only need to step in when captured facts need correcting, rather than being distracted by the need to navigate a case management system.
The whiteboard metaphor enables us to transform the problem we’re solving from the difficult one of automating human tasks (or even, creating an artificial human), to the much more tractable one of finding useful and complementary AI behaviors to add to the system.
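The shared whiteboard maps onto the classic blackboard pattern from AI systems. A minimal sketch, assuming stubbed-out behaviors: the caller name, trigger phrases, and behavior logic below are hypothetical stand-ins for real speech-to-text and fact-extraction services, not the implementation the article describes.

```python
# A minimal blackboard: behaviors watch a shared context and contribute
# facts when their trigger condition is met. Posting a fact is a stimulus
# that gives every registered behavior a chance to react.
class Whiteboard:
    def __init__(self):
        self.facts = {}
        self.behaviors = []

    def register(self, behavior):
        self.behaviors.append(behavior)

    def post(self, key, value):
        self.facts[key] = value
        for behavior in self.behaviors:
            behavior(self)

def recognize_client(board):
    # Trigger: transcript mentions a known caller and no client is identified yet.
    # (A real behavior might use voiceprints; the name here is invented.)
    if "client" not in board.facts and "Ms Chen" in board.facts.get("transcript", ""):
        board.facts["client"] = "Ms Chen"   # notionally fetched from case management

def extract_dependents(board):
    # Trigger: a key fact appears in the dialogue.
    if "dependent child" in board.facts.get("transcript", "") and "has_dependent" not in board.facts:
        board.facts["has_dependent"] = True

board = Whiteboard()
board.register(recognize_client)
board.register(extract_dependents)
board.post("transcript", "Hi, this is Ms Chen, calling about my dependent child...")
print(board.facts["client"], board.facts["has_dependent"])
# → Ms Chen True
```

The human worker is simply another behavior watching the same board, stepping in to correct or cancel a posted fact; adding capability means registering another behavior, not re-engineering a task pipeline.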
This approach can be extended by applying our call center whiteboard metaphor to solve old problems in new ways. We might apply this technique to a forklift in a warehouse, for example, enabling us to create a new approach to forklift support and maintenance, with the forklift “calling in” when it needs support, and the human and AI workers collaborating to identify and resolve its problem(s). Rethinking how to apply AI in this way enables us to identify three new possible approaches, for a total of four strategies for leveraging AI in the enterprise (figure 2).
Augment, the lowest level, is the familiar and common strategy of using AI to augment an existing task, such as leveraging the perfect predictions of AI to augment (or even replace) workers to improve productivity. We might also reduce costs by streamlining tasks to eliminate waste—as the earlier example of swapping static processes for dynamic real-time planning did. Our call center and forklift examples take a more aggressive approach, optimizing work by reorganizing it along different lines; the late electrification of factories, and the consequent invention of workflow, is the non-AI equivalent. Finally, we might imagine situations where a combination of the modern digital workplace and AI enables us to change what work needs to be done, renegotiating our relationships with external actors (other teams, departments, firms, or even our clients and customers) to create new opportunities, solve new problems, and unlock new value.
Common approaches to applying AI have us taking our AI hammer and looking for suitable nails to hit. While this has seen some success, it might be more productive to invert the approach: identify challenges and opportunities where changes to complementary technologies enable new solutions, and then pull in whatever technologies (new and established, AI and otherwise) are needed to address them.
The promise of AI remains out of reach as automating an activity involves transforming it. The real opportunity, the path to unlocking the full value of AI, requires us to think about work differently.36
Despite setbacks after its early promise, there is still a lot of value to be realized from AI. Efforts to apply particular AI technologies to particular problems may be stalling, but there is a lot of fertile ground yet to explore.
The challenge is to think differently. Rather than focusing on particular AI technologies and assuming that the technology is racing ahead, we need to think in terms of packages of technologies where our assumptions about some of the complements might be wrong. Rather than looking for problems that a “hot,” new technology might address, we need to look for challenges or opportunities where changes to complements enable a new approach.
There will always be barriers. The larger the scope of a change, the larger the challenge. Existing practices can also be difficult to shift. However, if we’re clear and realistic about both the opportunities and the barriers, then AI can create significant value.
1. See: Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Boston, Massachusetts: Harvard Business Review Press, 2018).
2. The assumption is that a self-improving artificial intelligence will rapidly evolve, producing new and ever more intelligent generations. This “explosion” of machine intelligence will soon outrun human intelligence, becoming a superintelligence more capable than humanity.
3. Gary Smith and Jeffrey Funk, “AI has a long way to go before doctors can trust it with your life,” Quartz, June 5, 2021.
4. Midjourney and Stable Diffusion are machine learning tools that create images from textual prompts. See: www.midjourney.com and stablediffusion.fr. Photoshop plugins have already been published for these tools, enabling AI-generated elements to be directly integrated into composite images. See: Jim Clyde Monge, “Stable diffusion arrives in Photoshop—Here’s how to install,” CodeX, 2022.
5. The IBM Global AI Adoption Index 2022 showed investment to be stable, with 35% of firms using AI in their businesses. See: IBM, “IBM Global AI Adoption Index 2022,” report, May 10, 2022.
6. Joe McKendrick, “AI adoption skyrocketed over the last 18 months,” Harvard Business Review, September 27, 2021.
7. Gartner predicted in 2019 that by 2022 some 85% of AI projects would fail. See: Andrew White, “Our top data and analytics predictions for 2019,” blog, Gartner, January 3, 2019. It’s difficult to support this prediction with data, likely because firms are often reluctant to report on failures. Figures between 60% and 80% are often quoted, which aligns with estimates from practitioner experience. This is roughly double the usual failure rate for IT projects.
8. Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines.
9. As the personal computer did, starting life as a means to access particular tools, such as spreadsheets, before developing into a general-purpose business appliance.
10. Similar to quantum computing. Quantum computing is often described as “finding all solutions in parallel,” which is not accurate. It’s a different approach to computation, one that has its own benefits and problems, and one which might be more effective for some types of problems. The problem is that we’re not sure what these particular problems look like. There might be a similar dynamic with AI, where it is a powerful tool for a particular type of prediction problem, but not an effective general tool for all prediction problems.
11. Deloitte research found that it can take three years before many new AI solutions provide benefits. See: Deloitte, State of AI in the Enterprise, report, October 2022.
12. We might compare AI to relational databases, for example. The concepts behind relational databases were outlined by E. F. Codd at IBM in 1970, with the first research implementation (System R) developed by IBM in 1974. The first commercial solution, the Multics Relational Data Store, was developed by Honeywell, with the first sale in June 1976. A number of other commercial solutions soon followed, such as Oracle (released in 1979 by Relational Software, which eventually became Oracle Corporation), IBM’s Db2, SAP Sybase ASE, and Informix, leading to the database revolution of the 1980s.
13. The mid-1990s, when Deep Blue was developed and released, was a time of exceptionally rapid improvement in the price and performance of general-purpose computers.
14. The Markov decision process is a mathematical framework for modeling decisions that are partly random and partly under the control of the decision-maker. It was developed in the late 1950s by Richard Bellman, but only came to the attention of AI researchers in the 1990s. See: Richard Bellman, “A Markovian decision process,” Journal of Mathematics and Mechanics 6, no. 5, 1957, pp. 679–84.
15. The ideas behind machine translation were first discussed in the early 1950s, while the theory was formalized in the 1980s. The first commercial solution was developed by Kharkov State University in 1991, and translated between Russian, English, German, and Ukrainian. The first web-based solution, SYSTRAN’s Babelfish, was available in 1996. Most people’s memories of machine translation are pegged to the launch of Google Translate in 2006.
16. The first truly autonomous car, the VaMoRs, drove down a highway in the 1980s. Daimler-Benz collaborated with Ernst Dickmanns on a project celebrating the 100-year anniversary of the company’s first car, the 1886 Benz Motorwagen. The car was based on a five-ton Mercedes van that Dickmanns and his team had built.
17. LiDAR (light detection and ranging) is a method for determining ranges for (distances to) objects by focusing a laser beam on them and measuring the time it takes for the light to bounce off the object and return to the LiDAR unit.
18. Rather than the minicomputers that were used in earlier autonomous driving prototypes.
19. Kranzberg’s third law: “Technology comes in packages, big and small.” See: Melvin Kranzberg, “Technology and history: ‘Kranzberg’s laws’,” Technology and Culture 27, no. 3, 1986, pp. 544–60.
20. We might consider the encoding scheme—how words and concepts were translated into bell chimes—to be the core technology, while the other technologies mentioned are supporting or complementary technologies. Initially, this might have been as simple as “dispatch the train when the bell rings,” though over time this evolved into more complex schemes such as Morse code.
21. Note that technology adoption might not be “faster” than in the past—it’s just that the population is now more urban than rural, and technology diffuses more rapidly through urban environments than rural ones.
22. This dynamic can be seen in the early successes of tools such as machine translation, and in the subsequent high failure rate of projects based on AI.
23. This is not to say that reframing the problem would not have its benefits. Autonomous buses might represent a more tractable problem, as buses follow predictable routes. This would enable the built environment to be engineered to make it easier for the buses to navigate. Teams could be formed to support the buses by stepping in when a bus cannot cope, such as after an accident when a human presence is required to represent the bus. The authors have discussed this approach previously; see: Peter Evans-Greenwood, Alan Marshall, and Matthew Ambrose, Reconstructing jobs: Creating good jobs in the age of artificial intelligence, Deloitte Insights, July 18, 2018; and Peter Evans-Greenwood, Alex Bennett, and Sue Solly, Negotiating the digital-ready organization, Deloitte Insights, March 30, 2022.
24. We might make a similar argument for the steam and internal combustion engines, whose development relied in part on the creation of sufficiently precise machine tools.
25. Where techniques such as the π- and ψ-calculi are used to compose processes by specifying how a set of tasks should be combined to provide the desired outcome.
26. A planning engine takes an initial state, a set of actions, and a goal, and computes the optimal sequence of actions required to reach the goal. An early and popular planning tool is the Stanford Research Institute Problem Solver (STRIPS), developed in 1971.
27. Recency bias is a cognitive bias that favors recent events over past ones.
28. The Box by Marc Levinson provides an easy and enjoyable introduction to the history of the development of the global multi-modal container network. See: Marc Levinson, The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, second edition (Princeton: Princeton University Press, 2016).
29. Port costs could represent from 10–50% of total shipping costs, making many commodities simply uneconomic to ship. See: “The World the Box Made,” in The Box (Princeton: 2016), particularly chapter 1, figure 1.
30. Calum Chace, “The impact of AI on call centres,” Forbes AI, August 10, 2020.
31. Shifting from steam (coal) to electricity could save a firm 20–60% on its power generation costs (direct savings). But these savings were dwarfed by those obtained from reorganizing production (indirect savings): a 20–30% productivity improvement while using the same floorspace, workers, machinery, and tooling. See: Warren D. Devine, “From shafts to wires: Historical perspective on electrification,” The Journal of Economic History 43, no. 2, 1983, pp. 347–72.
32. Adapted from: Peter Evans-Greenwood, Alan Marshall, and Matthew Ambrose, Reconstructing jobs: Creating good jobs in the age of artificial intelligence.
33. The authors first discussed this idea in: Peter Evans-Greenwood, Alan Marshall, and Matthew Ambrose, Reconstructing jobs: Creating good jobs in the age of artificial intelligence.
34. We could consider this shared context to be a conversational digital twin of the problem to be solved, capturing what is known and unknown as the problem evolves.
35. Work is commonly framed in terms of the cascade: product, process, task, and skill. This framing might be part of what’s holding AI back, as AI is better thought of as a behavioral (stimulus-response) technology than a task-performing one.
36. Peter Evans-Greenwood, Robert Hillard, and Alan Marshall, The new division of labor: On our evolving relationship with technology, Deloitte Insights, May 9, 2019.