In our second annual survey, early adopters remain bullish on cognitive technologies’ value and are ramping up AI investments. But questions linger about risk management—and about who at the company will push projects forward.
For the second straight year, Deloitte surveyed executives in the US knowledgeable about cognitive technologies and artificial intelligence,1 representing companies that are testing and implementing them today. We found that these early adopters2 remain bullish on cognitive technologies’ value. As in last year’s survey, the level of support for AI is truly extraordinary. Our analysis uncovered three main findings:
These findings illustrate that cognitive technologies hold enticing promise, some of which is being fulfilled today. However, AI technologies may deliver their best returns when companies balance excitement over their potential with the ability to execute.
To obtain a cross-industry view of how organizations are adopting and benefiting from cognitive computing/AI, Deloitte surveyed 1,100 IT and line-of-business executives from US-based companies in Q3 2018. All respondents were required to be knowledgeable about their company’s use of cognitive technologies/artificial intelligence, and 90 percent have direct involvement with their company’s AI strategy, spending, implementation, and/or decision-making. The respondents represent 10 industries, with 17 percent coming from the technology industry. Fifty-four percent are line-of-business executives, with the rest IT executives. Sixty-four percent are C-level executives—including CEOs, presidents, and owners (30 percent), along with CIOs and CTOs (27 percent)—and 36 percent are executives below the C-level.3
A year later, and the thrill isn’t gone. In Deloitte’s 2017 cognitive survey, we were struck by early adopters’ enthusiasm for cognitive technologies.4 That excitement owed much to the returns they said cognitive technologies were generating: 83 percent stated they were seeing either “moderate” or “substantial” benefits. Respondents also said they expected that cognitive technologies would change both their companies and their industries rapidly. In 2018, respondents remain enthusiastic about the value cognitive technologies bring. Their companies are investing in foundational cognitive capabilities, and using them with more skill.
Compared with their counterparts in typical companies,5 our early-adopter respondents have high—and growing—penetration rates of key cognitive technologies:
What’s behind the growth of cognitive technologies among early adopters,9 especially the popularity of sophisticated technologies such as deep learning? One answer is investment. Thirty-seven percent of respondents say their companies have invested US$5 million or more in cognitive technologies. Another reason is that companies have more ways to acquire cognitive capabilities, and they are taking advantage. Nearly 60 percent are taking what is perhaps the easiest path:10 using enterprise software with AI “baked in” (see figure 1).
More respondents gain cognitive capabilities through enterprise software, such as CRM or ERP systems, than by any other method. These systems have the advantage of access to immense data sets (often their own customers’ data), and can often be used “out of the box” by employees with no specialized knowledge.
The cognitive tools available through enterprise software are often focused on specific, job-related tasks. While this can make them less flexible, they may be impactful nonetheless. For example, Salesforce Einstein can help sales reps determine which leads are most likely to convert to sales, and the optimal time of day to contact those prospects. Moreover, vendors continually develop advanced tools, which are gradually integrated into the software. Salesforce recently developed an advanced NLP model for handling multiple use cases that typically require different models.11
The “easy path” will likely become even more attractive as software vendors and cloud providers develop AI offerings tailored to business functions. Google recently announced a set of prepackaged AI services aimed at contact centers and HR departments.12 SAP’s AI capabilities, which it collectively calls “Leonardo Machine Learning,” also include specific solutions such as cash management in finance, video analysis in brand management, and trouble ticket analysis in customer service. The need for companies to develop bespoke cognitive initiatives will likely decline as similar services enter the market.
Off-the-shelf solutions can go only so far, however. Many companies will likely need to develop customized solutions to meet their lofty expectations for cognitive technologies. Here, too, there are tools to accelerate adoption. Many of the big cloud providers offer AI through an as-a-service model: Instead of having to build their own infrastructure and train algorithms, companies can tap into the technologies they need right away, and pay only for what they use. According to a recent Deloitte study, 39 percent of companies prefer to acquire advanced technologies such as AI through cloud-based services, versus 15 percent that prefer an on-premise solution.13 Indeed, the appeal of the AI-as-a-service model is reflected in its annual global growth rate, which is estimated at a remarkable 48.2 percent.14
Cloud-based deep-learning services can give companies access to immense—and previously costly—computing power necessary to extract insights from unstructured data. They can also manage large data sets and accelerate app development with pretrained models.
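To make the pretrained-model idea concrete, here is a minimal sketch using the open-source transformers library as a rough stand-in for the hosted pretrained models that cloud providers expose as services; the sample reviews are invented, and no particular vendor's API is shown.

```python
from transformers import pipeline

# A pretrained language model downloaded on demand -- a rough stand-in for the
# hosted pretrained models that cloud providers offer as a service.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The new dashboard is fantastic and saves me an hour a day.",
    "Support never responded and the product keeps crashing.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```

The point of the sketch is that no model training or specialized infrastructure is required to start extracting signals from unstructured text.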
While there are myriad ways for companies to access ready-made AI or develop their own, many also seek outside expertise. Fifty-three percent of respondents codevelop cognitive technologies with partners, and nearly 40 percent use crowdsourcing communities such as GitHub.
Through cloud services and enterprise software, companies can try cognitive technologies and even deploy them widely, with low initial cost and minimal risk. The growing number of cloud-based options may explain the spike in pilots and implementations between 2017 and 2018. Fifty-five percent of executives say their companies have launched six or more pilots (up from 35 percent in 2017), and nearly the same percentage (58 percent) claim that they have undertaken six or more full implementations (up from 32 percent).
Many early adopters are investing in cognitive technologies to improve their competitiveness. Sixty-three percent of surveyed executives said their AI initiatives are needed to catch up with their rivals or, at best, to open a narrow lead (see figure 2).
And the linkage between adept application of AI and competitive advantage appears to be growing stronger. Eleven percent said that adopting AI is of “critical strategic importance” today, but 42 percent believe it will be critical two years from now. This is a small window for companies to hone their AI strategies and skills, and they believe their success depends upon getting it right. Executives are becoming more realistic about the time this will require, however. In our 2018 survey, 56 percent of respondents said cognitive technologies would transform their companies within three years, down from 76 percent last year. The same was true of industrywide transformation: 37 percent of our 2018 respondents think it will happen within three years, 20 points lower than in 2017. We believe executives are acknowledging the complexity of using cognitive technologies to drive change across lines of business, without despairing of attaining that goal.
Many companies’ AI goals extend well beyond ROI. Positive ROI, however, can build momentum for future investment and generate support for executive champions of AI, and the technologies seem to be delivering. In our survey, 82 percent said they have gained a financial return from their AI investments. For companies across all industries, the median return on investment from cognitive technologies is 17 percent. Some are more adept than others at turning investment into financial benefits (see figure 3).
While these returns are estimates based on self-reported data, they show that executives across industries feel they’re getting value from cognitive technologies. Tech companies are spending significantly on cognitive, and getting a strong return. They are also the driving force behind cognitive technologies, developing them for a market already estimated at US$19.1 billion globally.15 This includes giants such as Google, Microsoft, and Facebook, and literally thousands of startups.16 AI has also generated returns by improving operations and delivering superior customer experience. Netflix found that if customers search for a movie for more than 90 seconds, they give up. By using AI to improve search results, Netflix prevents frustration and customer churn, saving US$1 billion a year in potential lost revenue.17
Robust returns are not limited to tech companies. Both established manufacturers and innovative startups are using AI to make manufacturing more efficient. For example, industrial firms, such as GE and Siemens, are taking advantage of the data in “digital twins” of their machines to identify trends and anomalies, and to predict failures.18
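As an illustration of the kind of anomaly detection a digital-twin pipeline might run, the sketch below flags unusual sensor readings with scikit-learn's Isolation Forest. The sensor channels and values are hypothetical and are not drawn from GE's or Siemens's systems.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical sensor readings streamed from a machine's digital twin:
# columns are temperature, vibration, and pressure.
normal = rng.normal(loc=[70.0, 0.5, 30.0], scale=[2.0, 0.05, 1.0], size=(1000, 3))
faulty = rng.normal(loc=[95.0, 1.5, 22.0], scale=[2.0, 0.05, 1.0], size=(5, 3))
readings = np.vstack([normal, faulty])

# Isolation Forest flags readings that look unlike the bulk of the data;
# predict() returns -1 for suspected anomalies.
detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)
print("suspected anomalies:", int((flags == -1).sum()))
```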
Companies such as these are using AI to improve business processes, one of the most prominent benefits companies seek. In fact, our survey findings suggest that companies are placing increased emphasis on internal operations (see figure 4).
This shift toward internal operations has been accompanied by a somewhat reduced emphasis on integrating AI into existing products and services, although that remains the most popular objective. In fact, operational change is often required before such integration can take place. Our respondents may be realizing that they should make operational changes first.
Health care and life sciences companies are investing in AI but, according to our data, have less to show for it. Certainly, some health care “big bang” projects have disappointed thus far. However, advances in fields as diverse as radiology and hospital claims management show that AI offers substantial potential for value in health care,19 despite some high-profile stumbles. For example, in a recent study, deep-learning neural networks identified breast cancer tumors with 100 percent accuracy by analyzing pathology images.20 Such advances, however, are thus far only in the lab and will take time before entering clinical practice.
Despite the hype AI generates, many executives are excited—not wallowing in a trough of disillusionment. That’s translating into investment. Eighty-eight percent of companies surveyed plan to increase spending on cognitive technologies in the coming year; 54 percent say they will boost spending by 10 percent or more.
Earlier, we noted that eight in 10 surveyed executives claim positive ROI from their companies’ AI efforts. However, we should view ROI claims with a bit of caution: Less than 50 percent of surveyed companies measure key performance indicators necessary for gauging financial returns accurately. These indicators include critical elements such as project budget/cost, ROI, and targets for productivity, cost savings, revenue, and customers (such as satisfaction and retention). This lack of measurement gets to the heart of a significant problem with cognitive implementations: They are often not managed with the same rigor that companies use with more mature technologies.
Business and technology leaders confront an array of challenges as they seek to create business value with artificial intelligence. Many respondents cited implementation, integration into roles and functions, and measuring and proving the business value of AI solutions as top challenges of AI initiatives (see figure 5). Implementation can be a challenge with any technology, but given the relative newness of AI tools and the low levels of experience with them, it’s unsurprising that this was the most-cited challenge. Integration into the business is a challenge for technologies in general, but it may be particularly problematic with AI given the impact it can have on knowledge-worker tasks and skills.
Companies sometimes struggle in AI projects to navigate the “last mile” of behavior change.21 An example we have seen is an organization that built a machine-learning system to support the sales team by predicting which prospects were likely to convert and which customers were likely to churn. Though the system worked as planned, the sales team was initially unprepared to accept its recommendations. The team had not been closely involved in the development of the solution and neither understood nor trusted the results it produced. One way to avoid this problem is to involve business owners closely throughout the development process so they can better understand what is being delivered.
Anyone following business news about AI knows about the critical role played by data. Survey respondents cite "data issues" as one of the top challenges for their companies' AI initiatives. There are numerous reasons for this. Some AI systems, such as virtual assistants that enable customer self-service, require data from multiple systems that may never have been integrated before. Customer information may reside in one system, financial data in another, and virtual assistant training and configuration data in a third. AI creates a need for data integration that a company may have managed to avoid until now. This can be especially challenging in a company that has grown by acquisition and maintains multiple, unintegrated systems of diverse vintages.
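A toy example of that integration work: joining extracts from two systems that have never shared a key before, and surfacing the records that do not line up. The tables, columns, and values below are hypothetical.

```python
import pandas as pd

# Hypothetical extracts from two systems that were never integrated before.
crm = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "name": ["Acme Co", "Globex", "Initech"],
    "open_tickets": [2, 0, 5],
})
billing = pd.DataFrame({
    "customer_id": [101, 102, 104],
    "outstanding_balance": [1200.0, 0.0, 350.0],
})

# A left join keeps every CRM record and exposes customers missing from
# billing -- the kind of mismatch an AI project surfaces on day one.
combined = crm.merge(billing, on="customer_id", how="left")
print(combined)
```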
Another challenge for companies is that the type of data required for some AI projects is different from the data with which they’re accustomed to working. For example, some solutions depend on access to significant amounts of unstructured data that may have been retained for record-keeping but was never intended for analysis. In one virtual assistant project we know of, the team needed to review thousands of recorded phone calls to identify common themes with which to derive rules for the system. (It is possible to automate this analysis, but that would be an AI project of its own.)
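To give a sense of what automating that transcript review could involve, here is a minimal sketch that groups call transcripts into recurring themes using TF-IDF and matrix factorization in scikit-learn. The transcripts are invented, and a production system would need far more data and cleanup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Hypothetical call transcripts; in practice these would come from speech-to-text.
transcripts = [
    "I want to check my order status",
    "My invoice amount looks wrong",
    "How do I reset my password",
    "The order arrived damaged, I need a refund",
]

# Turn transcripts into TF-IDF vectors, then factor them into a handful of themes.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(transcripts)
nmf = NMF(n_components=2, random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-3:]]
    print(f"theme {i}: {top_terms}")
```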
Getting the data required for an AI project, preparing it for analysis, protecting privacy, and ensuring security can be time-consuming and costly for companies. Adding to the challenge, at least some of that data is often needed before it is even possible to conduct a proof of concept. We have seen companies that, because they had not fully considered the difficulty of obtaining the data they need, decided to shelve projects and disband teams until they were able to lay the proper data foundation.
Some organizations also struggle to articulate a business case or to define success for AI projects. This may be because AI is viewed as experimental. Sometimes it is because machine learning—one of the most widely used AI technologies—is inherently probabilistic, meaning that a new system’s ultimate performance can be difficult to estimate precisely. And sometimes it’s because the group charged with developing an AI solution is unaccustomed to developing business cases to justify its work.
It is a fact of life that novel situations often present new risks. The same is true of emerging technologies such as AI. Executives are concerned about a host of risks associated with AI technologies (see figure 6). Some of the risks are typical of those associated with any information technology; others are as unique as AI technology itself.
Chief among the AI risks that concern executives are cyber risks, which ranked as a top-three concern for half of our survey respondents (see figure 6). In fact, 23 percent of respondents ranked “cybersecurity vulnerabilities” as their No. 1 overall AI/cognitive concern. This apprehension is probably well placed: While any new technology has certain vulnerabilities, the cyber-related liabilities surfacing for certain AI technologies seem particularly vexing.
Researchers have discovered that some machine-learning models have difficulty detecting adversarial input—that is, data constructed specifically to deceive the model. This is how one research team fooled a vision algorithm into classifying as a computer what appeared to be a picture of a cat.22 The process of training machine-learning models can itself be manipulated with adversarial data. By intentionally feeding incorrect data into a self-learning facial recognition algorithm, for instance, attackers can impersonate victims via biometric authentication systems.23 In some cases, machine-learning technology may expose a company to the risk of intellectual property theft. By automatically generating large numbers of interactions with a machine-learning-based system and analyzing the patterns of responses it generates, hackers could reverse-engineer the model or the training data itself.
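To make the adversarial-input idea concrete, the sketch below implements the widely published Fast Gradient Sign Method against a pretrained image classifier. It illustrates the general technique rather than the specific attacks in the cited research, and the random input tensor merely stands in for a real photo.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier standing in for the vision models targeted in the research above.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, epsilon=0.01):
    """Fast Gradient Sign Method: take one small step on each pixel in the
    direction that most increases the loss, so the image looks unchanged to a
    person but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Illustrative call with a random "image"; a real attack would start from a
# correctly classified photo.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([285])  # an arbitrary ImageNet class index
x_adv = fgsm(x, y)
print(model(x_adv).argmax(dim=1))
```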
AI has also been used recently to create fake photos and videos of celebrities and politicians. While there are also techniques for identifying fakes, these technologies may fuel an arms race between fake-image creation and detection. Given the prominence of AI-based image recognition, this area is likely to be a cyber-risk battleground in the future.
There is evidence that cyber-risk concerns are slowing or pausing AI projects at some companies. In addition, one in five respondents said they decided not to launch AI initiatives due to cybersecurity worries (see figure 7).
Executives are commonly concerned about the safety and reliability of AI systems as well. Forty-three percent of respondents rated “making the wrong strategic decisions based on AI/cognitive recommendations” as a top-three concern (see figure 6). Nearly as many cited failure of an AI system in a mission-critical or life-or-death situation. Placing strategic decisions or mission-critical actions entirely in the hands of an AI system would certainly entail special risks. Entrusting AI systems with such responsibilities remains rare today, however. A prominent exception is the use of AI in autonomous vehicles: The technology has been implicated in several accidents, some fatal, during testing.24
Another element of cyber risk that companies should consider is how much data, and what kind of data, they are willing to put into public cloud environments, which let them use cognitive technologies to analyze much larger data sets than private clouds can support. Analysis of sensitive customer and financial data can yield valuable insights, but companies should weigh the perceived risks against the benefits. A recent Deloitte study found that the more experience organizations have with cloud computing, the more comfortable they are putting sensitive data into public clouds.25
Products and systems of all types, including IT systems, present a range of legal and regulatory risks. As a result, it is unsurprising that four in 10 survey respondents indicate a high degree of concern about the legal and regulatory risks associated with AI systems. Because not all methods of validating AI systems’ accuracy and performance are reliable, companies will need to manage the legal, regulatory, and operational risks associated with these systems. Complicating matters are questions surrounding who can be held liable in the event of an AI-related crime or mishap. How liability is assigned in these cases is a topic of ongoing discussion.26
Two themes are particularly salient when it comes to AI and regulatory risk: privacy and explainability. Because data is so critical to AI, companies seeking to apply the technology are often hungry for the stuff. Privacy regulations governing personal data may dampen their appetite, though: The General Data Protection Regulation (GDPR), which has recently come into force in Europe, sets privacy rules that require careful implementation. GDPR also mandates that companies using personal data to make automated decisions affecting people must be able to explain the logic behind the decision-making process.27 Guidance published by the US Federal Reserve (SR 11-7) affects US banking similarly: It requires that the behavior of computer models be explained.28 What makes these regulations challenging for some AI adopters is the growing complexity of machine learning and the increasing popularity of deep-learning neural networks, which can behave like black boxes, often generating highly accurate results without an explanation of how these results were computed. Many tech companies and government agencies are pouring resources into improving the “explainability” of deep-learning neural networks.29
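As one simple, model-agnostic explainability technique, the sketch below computes permutation feature importance with scikit-learn: shuffle one input at a time and measure how much the model's accuracy drops. The synthetic data set is a stand-in for a real credit or lending model, and this is only one of many approaches to explainability.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making data set.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops -- a coarse but model-agnostic view of which inputs
# drive the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```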
For most of our respondents, ethical risks are not a top-of-mind concern. While ethical risks ranked at the bottom of the risk concerns in our survey, about a third of executives did cite them as a top concern.
In a deeper look at potential ethical risks, surveyed executives revealed a wide range of concerns. At the top of the list is AI’s power to help create or spread false information. This may be due to the attention that social-media-driven “fake news” received in the 2016 US elections.
Some of the ethical risks that resonated with our respondents are linked to the aforementioned cyber-safety and regulatory issues: unintended consequences, misuse of personal data, and lack of explanation for AI-powered decisions. But one concern has achieved special prominence in recent years and ranked second among the ethical risks our respondents cited: bias.
Today, algorithms are commonly used to help make many important decisions, such as granting credit, detecting crime, and assigning punishment. Biased algorithms, or machine-learning models trained on biased data, can generate discriminatory or offensive results. For example, one study found that ads for high-paying jobs were shown more often to men than to women.30
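A basic check teams can run before deploying such a model is to compare outcomes across groups. The sketch below computes a simple demographic-parity gap on a tiny, invented set of scored applications; real bias audits use larger samples and several complementary metrics.

```python
import pandas as pd

# Hypothetical scored loan applications: 1 = model recommends approval.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1, 0, 1, 0, 0, 1, 0, 1],
})

# Demographic parity difference: the gap in approval rates between groups.
rates = df.groupby("group")["approved"].mean()
print(rates)
print("Parity gap:", rates.max() - rates.min())
```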
Do early adopters have the talent to develop and deploy cognitive solutions? The overall survey results suggest both a considerable amount of talent already and a strong demand for more. “Lack of AI/cognitive skills” was a top-three concern for 31 percent of respondents—below such issues as implementation, integration, and data. A skills shortage was identified as the biggest challenge in moving from prototypes to full production deployments for only 8 percent of respondents.
Companies generally feel that they have substantial AI capabilities. About four in 10 executives report their companies have a high level of sophistication in managing and maintaining AI solutions, selecting AI technologies and technology suppliers, integrating AI technology into the existing IT environment, identifying valuable applications of AI, building AI solutions, and hiring and managing technical staff with AI skills. An additional 41–46 percent say their companies have familiarity with these activities. This suggests that they don’t have a severe talent shortage.
In addition to internal resources, many companies are pursuing broad approaches to the “talent ecosystem.” Ten percent of our survey respondents said they get talent from companies they have acquired, invested in, or partnered with. And as we’ve seen, companies effectively outsource some of their talent needs by using either AI-as-a-service capabilities, or crowdsourced development communities such as GitHub and Bitbucket.
Despite the sophistication of their internal teams and access to outside talent, executives feel they need more skilled people. Thirty percent said they face a major (23 percent) or extreme (7 percent) skills gap. Another 39 percent said their gap is “moderate.” Interestingly, the most advanced companies in our survey feel the skills gap acutely.31 The limitations of their technical skills may be exposed as they launch more AI solutions, and as those solutions increase in complexity and scale.
Some skills are needed more than others (see figure 8).32 Respondents report the highest level of need for AI researchers to invent new kinds of AI algorithms and systems. This suggests an aggressive level of ambition for the technology. In addition, 28 percent said they need AI software developers, 24 percent need data scientists, and roughly similar percentages need user-experience designers, change-management experts, project managers, business leaders, and subject-matter experts. Sixty-one percent are already training IT staff to deploy AI/cognitive solutions, and 54 percent are training developers to create new AI/cognitive solutions.
Given the level of enthusiasm and aggressive adoption of AI/cognitive that we have seen in responses to other aspects of this survey, the findings on talent are perhaps unsurprising. They suggest that while talent is not the most important concern of the moment, there is an ongoing need to hire and train highly capable AI experts. Companies with this kind of commitment to AI/cognitive technologies will probably be living with skills gaps for a long time.
Although AI projects often flounder because the relevant technology skills are in short supply, organizations should recognize that success depends on more than technology talent. For example, data scientists often struggle when they aren't clear on the business problem they're supposed to solve. The result can be AI projects that go nowhere, and disillusioned data scientists who defect to competitors.33 Subject-matter experts who can "speak data" to data scientists while "speaking business" to executives can be valuable, yet only 20 percent of companies in our survey said they were needed. And given their struggles to implement AI solutions and manage projects, it's surprising that only 22 percent cited a need for project managers and change-management experts.
Companies in our survey are searching for the right balance between using AI to automate tasks (cutting costs, and jobs) and using it to augment the capabilities of their workforces. AI-driven automation is not seen as a top benefit of AI. "Reduce headcount through automation" received the lowest ranking among the list of "primary benefits from AI/cognitive technology" choices, with 24 percent of respondents rating it among their top three choices.
That said, there is evidence that many companies plan to automate tasks and cut jobs. Sixty-three percent of respondents agreed with the following statement: “To cut costs, my company wants to automate as many jobs as possible with AI/cognitive.” In many companies we have observed, the business case for some cognitive projects, such as chatbots, relies heavily on using AI to replace workers.
While our survey did not directly address the magnitude of job losses due to AI, 36 percent of respondents feel that job cuts from AI-driven automation rise to the level of an ethical risk. In some industries, such as financial services, executives have been candid about their plans to automate tens of thousands of jobs in the next few years.34 Perhaps some losses have already happened. The number is likely to rise as cognitive technologies become more firmly entrenched.
The threats to existing workers are not only from automation-related job loss. The great majority of survey respondents agree that AI leads to either moderate or substantial changes in job roles and skills, both already (72 percent) and in three years (82 percent). But perhaps of greatest concern for the currently employed should be executives’ preference for new hires with the required skills over retraining and retaining current workers. Only 10 percent of respondents stated a clear preference for retraining and keeping current employees. Eighty percent leaned toward either “keeping or replacing employees in equal measure” or “primarily replace current employees with new talent.”
Despite the threats to current employees due to AI, executives believe that cognitive technologies will make the remaining and newly added workers better at their jobs, and happier at the same time.35 Seventy-eight percent of survey respondents agree that AI/cognitive technologies empower people to make better decisions, and 72 percent believe that AI will increase job satisfaction. Perhaps the biggest advantage could be new ways of working that blend the best of what machines do with human experience, judgment, and empathy; 78 percent of executives believe that AI-based augmentation of workers will fuel new ways of working.
This is no time for American workers to be complacent. While automation is not a top priority for many companies, it still looms as a growing threat. In addition, many companies are looking to acquire new AI-related skills with outside talent.
For the second year in a row, we’ve seen that early adopters are using cognitive technologies to effect positive change for their companies. Overall, their reactions to this new set of tools are remarkably bullish. Though they face challenges, many of the companies we surveyed are having early success integrating AI within their operations and customer relationships—and getting economic benefits. They are enthusiastic about their successes to date, and about the potential for these technologies to transform their companies in the near future. We believe the excitement early adopters express about AI is warranted. We also think that early adopters—and companies that want to emulate them—could have a surer path to success if they take the following steps.
Early adopters should combine their experimentation and industrious—even frenetic—activity with better operational discipline. Despite its complexity and transformational potential, AI’s implementation resembles that of other technologies. To drive change across lines of business, companies should focus on project management and change management. The fundamentals of fostering organizational change can get lost amid excitement around pilots, grassroots experiments, and vendor-driven hype. Some management infrastructure around AI is being created; our survey results reveal the following indicators of structures and processes to improve execution:
However, to ensure that cognitive remains a top priority once the hype subsides, leaders of AI initiatives should ensure that costs and impacts are tracked carefully, and that successes are incontrovertible. This will help CFOs make the investments required as projects—and budgets—get bigger.
Early adopters' problems with cybersecurity make their execution problems clear. Less than half are building cybersecurity into their AI projects, even though cybersecurity is the top risk cited by executives in our survey: They fear that both the algorithms that deliver insights and the data that fuels those algorithms are vulnerable to attack. And recall that respondents' biggest concern about the ethics of AI is the falsification of images and bots that create "fake news," both of which are easier to propagate when cybersecurity preparedness is low. The worst-case scenarios, such as autonomous vehicles getting hacked, can have life-threatening ramifications.
We’ve seen that cybersecurity is having a negative impact on some early adopters—32 percent have suffered an AI-related breach. Some companies are also slowing down or halting AI initiatives due to cybersecurity concerns. Others are forging ahead despite them. Neither is ideal: One leads to slower implementation and reduced competitiveness, the other to unnecessary risks. No cybersecurity efforts can prevent every attack, but early adopters can improve their defenses by incorporating security from the beginning of the process, and making it a higher priority.
Advances are being made to reduce risks associated with AI—for instance, forensic technology is getting better at detecting manipulated images and videos, known as “deep fakes.”36 The explainability of deep-learning models will likely also improve, helping companies to avoid regulatory noncompliance and other risks associated with bias in algorithms. Companies should stay on top of these developments and incorporate them when they are proven.
The top three cognitive use cases in our study—IT automation, quality control, and cybersecurity—are largely focused on IT (see figure 9). These are important use cases—especially cybersecurity (in this case, tackling cybersecurity with AI). IT automation is showing early promise, according to some studies.37
It makes sense that complex technologies, which require heavy involvement from the IT department, would be applied there first. But the transformative potential of AI will likely be reached only when it permeates an entire company and enables change in multiple business functions and units. Cloud can play a pivotal role in achieving those objectives via services that provide broad ranges of users with easy access to AI-based capabilities.
Enterprise software and cloud services give companies expanding options for adopting cognitive, without taking the tortuous path of having to build everything from scratch. Although cognitive technologies are still evolving, this evolution is happening at a breakneck pace. Cloud-based CRM and ERP software with cognitive capabilities are widely available, as are chatbots. Many big cloud providers are developing subscription-based AI services aimed at specific business functions. This may be the easiest path to getting the benefits of AI into functions such as product design and sales and marketing.38
For companies that want to develop their own solutions, tools such as automated machine learning can also boost the capabilities of "ordinary" programmers. AWS wants to democratize AI so that programmers without specific AI training can use it. Google and the startup DataRobot have similar ambitions with their automated machine-learning offerings. Certainly, companies need expertise within their own "four walls," but they should examine which capabilities they can get from enterprise software and cloud-based platforms. This can lead to quick wins, lower initial investment, and momentum.
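As a small-scale illustration of what automated model search does, the sketch below uses scikit-learn's grid search on a built-in data set. The managed AutoML services from the vendors named above automate far more (feature engineering, model selection, deployment), and their specific APIs are not shown here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Automated search over model settings -- a simplified stand-in for what
# managed AutoML services do across many model families at once.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```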
There are many potential holes in any company’s AI capabilities. But focusing only upon the hardest talent to attract and retain—AI researchers and programmers, and data scientists—may not be the best strategy, especially for companies just starting out. AI newcomers may want to see how far they can get using off-the-shelf solutions and cloud platforms. Partners and consultants can also provide much-needed expertise and guidance, and most of our early adopters are using them.
Companies also need sound strategies for talent development and acquisition. While many early adopters say they are training their employees for the new roles and skills AI requires, they prefer to hire new workers from outside their organizations. Both approaches may be required, especially for scarce technical skills. We believe there is more potential for retraining current employees to work alongside smart machines than the survey respondents seem to recognize, although it's important to start early with such initiatives.
But when considering which AI development and implementation skills a company should have in-house, companies should consider the mix of talent needed to manage AI projects successfully. Focusing too much on acquiring high-cost, scarce talent that the tech giants are scooping up in a bitter arms race39 may lead to frustration and disappointment. Companies that are developing their own bespoke cognitive solutions need significant technical talent. But they also likely need business executives who can “speak AI” to data scientists and understand the uses and limitations of data analytics.
Companies that automate simply to cut costs or improve efficiency are not taking full advantage of AI. Yet there are clear use cases where automation is simply better and more efficient than humans. In these use cases, machines will likely eventually replace people altogether. In many more instances, machines will surface information, make predictions, and offer alternatives. Humans, using judgment, empathy, and business skill, should apply this information to best effect. This is a matter not simply of placing humans in the loop but of the loop being built to augment human decision-making.
Knowing where companies want “automation to replace” and where they want “intelligence to augment” will likely help them be clear about how they will change operations, and what kind of people they need to recruit—and cut. It’s naïve to think that AI won’t cost jobs, and some CEOs are becoming more frank about admitting this.40 However, exclusive focus on automation and cost-cutting could hinder chances to use AI for transformative efforts that leverage the best of artificial and human intelligence.41 It could also fuel distrust and fear among employees who may be waiting for the other shoe to drop.
Our survey results clearly show that growing numbers of companies are becoming more sophisticated in their usage of AI technologies. Now is the time for organizations to start selecting the business use cases that can deliver measurable value through AI-powered capabilities. With cloud services as a gateway, it has never been easier to explore and access AI's potential, with minimal up-front investment and a reduced need for in-house expertise.