How will artificial intelligence impact intel analysis and, specifically, the intelligence community workforce? Learn what organizations can do to integrate AI most effectively and play to the strengths of humans and machines.
In the last decade, artificial intelligence (AI) has progressed from near-science fiction to common reality across a range of business applications. In intelligence analysis, AI is already being deployed to label imagery and sort through vast troves of data, helping humans see the signal in the noise.1 But what the intelligence community (IC) is now doing with AI is only a glimpse of what is to come. These early applications point to a future in which smartly deployed AI will supercharge analysts’ ability to extract value from information.
The adoption of AI has been driven not only by increased computational power and new algorithms but also by the explosion of data now available. By 2020, the World Economic Forum expects there to be 40 times more bytes of digital data than there are stars in the observable universe.2 For intelligence analysts, that proliferation of data means near-certain information overload. Human analysts simply cannot cope with that much data. They need help.
Intelligence leaders know that AI can help cope with this data deluge, but they may also wonder what impact AI will have on their work and workforce. According to surveys of private sector companies, there is a significant gap between the introduction of AI and understanding of its impact. Nearly 20 percent of workers report experiencing a change in roles, tasks, or ways of working as a result of implementing AI, yet nearly 50 percent of companies have not measured how workers are being affected by AI implementation.3 This article begins to tackle those questions, offering a task-level look at how AI may change work for intel analysts. It also offers ideas for organizations seeking to speed adoption and move from pilots to full-scale use. AI is already here; let’s see how it will shape the future of intelligence analysis.
The term “artificial intelligence” can mean a huge variety of things depending on the context. To help leaders navigate such a wide landscape, it is useful to distinguish between the model classes of AI and the applications of AI. The first classifies AI by how it works; the second by what tasks it is set to do.
Intelligence flows through a five-step “cycle” carried out by specialists, analysts, and management across the IC: planning and direction; collection; processing; analysis and production; and dissemination (figure 2). The value of outputs throughout the cycle, including the finished intelligence that analysts put into the hands of decision-makers, is shaped to an important degree by the technology and processes used, including those that leverage AI.
Technologies such as unmanned aerial systems, remote sensors, advanced reconnaissance airplanes, the internet, computers, and other systems have supercharged the collection process to such an extent that analysts often have more data than they can process.4 Complicating matters, the data collected often resides in different systems and comes in different mediums, requiring analysts to spend time piecing together related information—or fusing data—before deeper analysis can begin.
Access to more data should be a good thing. But without the ability to fuse and process it, data can inundate analysts with mountains of incoherent pieces of information. The director of the National Geospatial Intelligence Agency said that if trends hold, intelligence organizations could soon need more than 8 million imagery analysts alone, which is more than five times the total number of people with top secret clearances in all of government.5 In the modern digitized age, where success in warfare depends on a nation’s ability to analyze information faster and more accurately than adversaries, data cannot go unanalyzed.6 But given the pace at which humans operate, there simply isn’t enough time to make sense of all the data and perform the other necessary intelligence cycle tasks.
AI can provide much-needed support. Intelligence agencies are already using AI’s power to sort through volumes of data to pull out critical “knowns” for further analysis. For example, agencies have used AI to automatically detect and label patterns of vehicles that indicate SA-21 surface-to-air missile batteries, or to sift through millions of financial transactions for patterns consistent with illicit weapons smuggling. Similarly, the Joint Artificial Intelligence Center (the Department of Defense’s focal point for AI) is already working to develop products across “operations intelligence fusion, joint all-domain command and control, accelerated sensor-to-shooter timelines, autonomous and swarming systems, target development, and operations center workflows.”
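To make the transaction-sifting example concrete, the sketch below shows one common approach: an unsupervised anomaly detector that flags unusual records for human review. It is a minimal illustration using scikit-learn’s IsolationForest on synthetic data; the features, thresholds, and figures are hypothetical, not drawn from any IC system.

```python
# Minimal sketch: flag anomalous transactions for analyst review.
# All data, features, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per transaction: amount (USD) and hour of day.
normal = np.column_stack([rng.normal(500, 150, 1000), rng.integers(8, 18, 1000)])
unusual = np.array([[95000, 3], [120000, 2], [88000, 4]])  # large, late-night transfers
transactions = np.vstack([normal, unusual])

# Fit an unsupervised anomaly detector and surface outliers for a human.
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(transactions)          # -1 marks anomalies
review_queue = transactions[flags == -1]
print(f"{len(review_queue)} transactions flagged for human review")
```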
Our analysis suggests AI operating in these capacities can save analysts’ time and enhance output. While exact time savings will depend on the type of work performed, an all-source analyst who has the support of AI-enabled systems could save as much as 364 hours a year, the equivalent of more than 45 eight-hour working days (figure 3). These savings can free up analysts to devote more time to higher-priority tasks or build skills through additional training, among other activities. (For more information on our methodology, see Appendix.)
The benefits of AI, however, can go far beyond time savings. After all, intelligence work never ends; there is always another problem that demands attention. So saving time with AI will not reduce the workforce or trim intelligence budgets. Rather, the greater value of AI comes from what might be termed an “automation dividend”: the better ways analysts can use their time after these technologies lighten their workload.
Indeed, research on industries from banking to logistics shows that the greatest benefit of automation comes when human workers use technology to “move up the value chain.”7 Put another way, they spend more time performing tasks that have greater benefit to the organization and/or customer. For example, when automation freed supply chain workers from tasks such as measuring stock or filling in order forms, they could redeploy that time to create new value by matching specific customer needs to supplier capabilities.8 For intelligence analysis, leveraging AI to instantly pull otherwise hard-to-spot indications and warning (I&W) leads out of messy data could allow human analysts to do the higher-value work of determining whether a given I&W lead represents a valid threat.
There are two main ways to create additional value with extra time: Analysts can spend more time on higher-value tasks that they already do, or they can add new high-value tasks.
Before these benefits can be realized, however, intelligence organizations must determine which are the highest-value tasks and, therefore, the best suited for human workers to perform. To start, let’s compare humans with computers or other machines.
The key is in understanding the difference between specialized intelligence and general intelligence. Even a simple pocket calculator can outperform the best math whiz at some tasks. But while the calculator is fast and accurate, arithmetic is the only task it can perform. It has a very narrow, specialized intelligence. Humans, on the other hand, tend to outperform even the most advanced computers in general intelligence. As MIT professor Thomas Malone explains, “Even a five-year-old child has more general intelligence than the most advanced computer programs today. A child can carry on a much more sensible conversation about a much wider range of topics than any computer program today, and operate more effectively in an unpredictable physical environment.”9
So while machines are better than humans at handling large volumes of data or working to extreme levels of precision, humans are better at tasks that change dramatically with context or those that involve high levels of interpersonal interaction. Teamed together, human workers and AI tools can each play to their strengths; AI tackles the huge volumes of data and humans deal with the highly variable tasks. Inside intelligence organizations, human analysts can move up the value chain by offloading many of their data-heavy processing- and exploitation-related tasks onto machines. They can then put more of their own energy into the analysis, planning, and direction tasks that often require more creativity, communication, and collaboration with colleagues and decision-makers.
Our model (see “Appendix: Methodology”) makes similar predictions for intel analysts. With AI taking on tasks such as data cleaning, labeling, or pattern recognition, all source analysts can spend more time on context-sensitive or uniquely human tasks. As a result, future analysts will likely spend more time collaborating with others—up to 58 percent more than they do today.
How could greater collaboration play out across the intel cycle? As an example, in the dissemination stage, analysts present information to decision-makers, collaborating with them so they can make the best decisions. If AI could take on much of the prep work in assembling sources, creating graphics, or even drafting reports, human analysts could focus on the needs of the decision-maker and the implications of the situation. In this scenario, an analyst would simply provide AI with the topic of an upcoming briefing or finished product. From there, AI could automatically generate a list of relevant reports to read through, preselect maps or imagery, label the relevant features for a briefing, and even write short summaries of background events.
A similar shift is already taking place in journalism. AI is being used to automatically generate simple news stories.10 In its first year, the Washington Post’s bot published 850 articles on everything from the Olympics to elections. By automating detail-oriented tasks such as writing corporate earnings reports, the AP found that use of bots reduced journalists’ workload by 20 percent, allowing them to focus on reducing errors and spotting larger trends.11 As a result, even as output increased, there were fewer errors in corporate earnings stories. Intel analysts could benefit from a similar arrangement: AI could generate routine intelligence summaries or daily reports, allowing analysts to focus on synthesizing those reports into larger trends or customizing reports to the preferences of specific decision-makers.
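The newsroom bots cited above largely work by filling structured data into templates. The sketch below shows that pattern applied, hypothetically, to a routine daily intelligence summary; the fields, wording, and data are illustrative assumptions, not an actual IC product format.

```python
# Minimal sketch of template-driven report generation, in the spirit of
# newsroom earnings bots. Fields and wording are hypothetical.
from dataclasses import dataclass

@dataclass
class DailySummaryInput:
    region: str
    reports_received: int
    notable_events: list

TEMPLATE = (
    "Daily summary for {region}: {reports_received} field reports received. "
    "Notable events: {events}."
)

def draft_summary(data: DailySummaryInput) -> str:
    """Produce a routine draft summary for an analyst to review and refine."""
    events = "; ".join(data.notable_events) if data.notable_events else "none reported"
    return TEMPLATE.format(region=data.region,
                           reports_received=data.reports_received,
                           events=events)

print(draft_summary(DailySummaryInput("Region X", 42, ["convoy movement near checkpoint"])))
```

The point of such a sketch is the division of labor: the machine handles the rote assembly, while the analyst spends time on synthesis and tailoring the product to the decision-maker.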
As we have all experienced, new technologies can come with new tasks. So AI will most likely also introduce entirely new tasks for workers to handle. Using the adoption of other advanced technologies as a guide, we expect many of these new tasks to fall into one of three categories:
The fact that AI could require new tasks just to make sure it is operating correctly does highlight a potential danger: AI could eat up more time than it gives to analysts. And given that AI brings so much change, organizations adopting AI at scale will experience some level of friction. You simply cannot change the tasks of 20 percent of your workforce or add weeks’ worth of new tasks without straining staff, business processes, and existing tools. Intelligence organizations that want to get the best from AI need to recognize the pitfalls and find ways to mitigate them.
Perhaps the most significant pitfall is the possibility that, rather than creating new value, AI ends up monopolizing analysts’ time. Such situations have emerged before, such as with the health care industry’s implementation of electronic health records (EHR). While EHR promised to reduce health care professionals’ workloads, recent research has shown that EHR has, in fact, increased the amount of time it takes doctors to document patient visits.16 Doctors using EHR spend more time typing during patient visits, which reduces the amount of face-to-face time they have with patients. Overall, this drop in interaction has fed negative perceptions among both patients and doctors.17
Interestingly though, the EHR example can help intelligence organizations avoid this pitfall. While doctors spend more time documenting in EHR than with paper notes, nurses and clerical staff actually experience significant time savings in their tasks. So EHR causing doctors to spend more time is not necessarily a failure of the technology; rather, it reflects the strategic priorities of the organization, essentially shifting some billing and clerical workload from staff to physicians.18 If we are unhappy with the outcome, it is not the technology’s fault. Rather, it reflects a need to reevaluate the business and technical strategies that led to it.
If intelligence organizations are to avoid similar issues with the adoption of AI at scale, they must be clear about their priorities and how AI fits within their overall strategy. An organization focused on increasing productivity will pursue very different AI tools from one looking to improve the accuracy of analytical judgments. AI is not the solution to every problem, and having a clear vision about its value can help ensure it is applied to the right problems. Having clarity about the goals of an AI tool can also help leaders communicate their vision for AI to the workforce and alleviate feelings of mistrust or uncertainty about how the tools will be used.
Second, intelligence organizations should avoid investing in “empty technology,” that is, using AI without access to the data it needs to be successful. AI is something like a flour mill: Without the grain to feed it, it is not going to produce much value. Even the most advanced AI tool will have limited utility if it lacks effective training data or sufficient input data. Without the right data, AI tools can still eat up time as analysts attempt to use them, but their outputs will be of limited utility. The result will be frustrated analysts who view AI as a waste of their limited time.
Analysts’ perceptions are critically important to the successful at-scale adoption of AI. Survey results suggest that analysts are more skeptical of AI than technical staff, management, or executives.19 And as seen above, if the workforce does not see the value in a tool, it is unlikely to use that tool.
To overcome this skepticism and get the most from AI, management will need to focus on educating the workforce and reconfiguring business processes to seamlessly integrate the tools into workflows. Without these steps, AI can become a costly afterthought. For example, one federal agency implemented an AI pilot to generate leads for its investigators to follow up on. However, the investigators were also simultaneously generating their own leads. With limited time for follow-up, the investigators naturally prioritized the leads they had come up with themselves and rarely used the leads generated by AI.20
Overcoming analysts’ initial doubts about a given AI tool comes down to creating trust between the analyst and the tool. Because they must stand behind their assessments even when powerful people may disagree, analysts harbor an understandable reluctance to put faith in something they cannot explain and defend. An interface that allowed the analyst to easily scan the data underpinning a simulated outcome, for example, or to view a representation of how the model came to its conclusion, would go a long way toward making the technology part and parcel of the analyst’s workflow. It would also yield more reliable, trusted data and, in turn, more reliable analysis for war fighters and decision-makers.
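As one illustration of what such an interface could surface, the sketch below trains a simple tree-based classifier on synthetic indicator data and prints its global feature importances, a first-cut view of what drives the model’s conclusions. The feature names and labels are hypothetical; richer per-prediction explanation tools exist, but this minimal example shows the idea.

```python
# Minimal sketch: expose what drives a model's conclusions so an analyst
# can inspect and defend it. Features, labels, and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["signal_activity", "logistics_movement", "comms_volume"]

X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "threat" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global feature importances offer a first-cut explanation of the model.
for name, weight in sorted(zip(feature_names, model.feature_importances_),
                           key=lambda p: -p[1]):
    print(f"{name}: {weight:.2f}")
```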
While a workforce that lacks confidence in AI’s outputs can be a problem, the opposite may also turn out to be a critical challenge. For decades, intelligence leaders have been aware of the phenomenon in which giving analysts more data increases their confidence that their judgments are right without actually improving the work’s overall accuracy.21 In other words, more data played into analysts’ confirmation bias: they used the new evidence to support their preconceived conclusions rather than to create more accurate analysis.
The psychology experiments that lie at the heart of that observation were done with two to five times more data. AI would make orders of magnitude more data available to analysts, possibly exacerbating analysts’ confirmation bias. For example, in the financial services industry, early experience shows that AI can provide analysts with roughly 30 times the amount of data available today.22 It is simply unknown how human cognition will respond to such an unprecedented volume of data. Analysts could become less confident in AI judgments due to information overload. Or, conversely, with so much data at their disposal, analysts could become overconfident, implicitly trusting the AI. The latter could be especially dangerous: Many aviation accidents have shown that a mismatch between human trust in automation and human understanding and supervision of it can lead to tragedy.23
Conversely, there are promising ways in which AI could actually help analysts combat confirmation bias and other human cognitive limitations. For instance, AI could be given tasks that help check the validity of assessments, checks that humans struggle to find time for or that are burdensome to perform manually. Machines would be very good at continuously conducting key assumptions checks, analyses of competing hypotheses, and quality of information checks.24 Senior analytic managers could also leverage AI to alert them to mismatches between incoming evidence and their teams’ assessments, giving them an opportunity to direct analytic line reviews and focus their attention on problem areas.
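A toy version of an automated analysis-of-competing-hypotheses check might look like the sketch below: each hypothesis is scored by how much evidence is inconsistent with it, since ACH weights disconfirming evidence most heavily. The evidence items, hypotheses, and ratings are invented for illustration.

```python
# Toy sketch of an automated ACH check. Evidence items, hypotheses, and
# consistency ratings (+1 consistent, 0 neutral, -1 inconsistent) are invented.
evidence_matrix = {
    "troop movement reported":   {"H1: exercise": +1, "H2: offensive": +1},
    "leave cancelled":           {"H1: exercise": -1, "H2: offensive": +1},
    "fuel stockpiling observed": {"H1: exercise":  0, "H2: offensive": +1},
}

hypotheses = {"H1: exercise", "H2: offensive"}

# ACH favors the hypothesis with the LEAST evidence against it, not the
# most evidence for it, so count disconfirming items per hypothesis.
inconsistency = {h: sum(1 for ratings in evidence_matrix.values() if ratings[h] < 0)
                 for h in hypotheses}

for h, score in sorted(inconsistency.items(), key=lambda p: p[1]):
    print(f"{h}: {score} piece(s) of inconsistent evidence")
```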
In the end, the impact AI may have on the cognitive biases of analysts is simply not known. Leaders need to pay careful attention to analysts’ concerns, evaluate business process design, and continuously monitor AI performance to help prevent any potential pitfalls.
The greatest benefits of AI will be achieved when, like electrification, it is embedded into every aspect of an organization’s operation and strategy.25 For all the game-changing benefits that AI can bring at scale, or the organization-shaking pitfalls, the immediate steps to getting started can be surprisingly familiar.
Across a government agency or organization, successful adoption at scale would require leaders to harmonize strategy, organizational culture, and business processes. If any of those efforts are misaligned, AI tools could be rejected or could fail to create the desired value. Leaders need to be upfront about their goals for AI projects, ensure those goals support overall strategy, and pass that guidance on to technology designers and managers to ensure it is worked into the tools and business processes.
Establishing a clear AI strategy can also help organizations apply AI to tackle a variety of problems, from mission-facing to back-office. Such a strategy can frame decisions about what infrastructure and partners are necessary to access the right AI tools for an organization. With 83 percent of enterprise AI in the cloud, organizations can find it easier to develop AI tools in-house, purchase from external vendors, or even find an existing solution already in use elsewhere in the cloud.26
At a division or team level, the first steps shift from strategic alignment to analyst adoption. Tackling some of the significant nonanalytical challenges analyst teams face could be a palatable way to introduce AI to analysts and build their confidence in it. Today, analysts are inundated with a variety of tasks, each of which demands different skills, background knowledge, and the ability to communicate with decision-makers. For any manager, assigning these tasks across a team of analysts without overloading any one individual or delaying key products can be daunting. AI could help pair the right analyst to the right task so that analysts can work to their strengths more often, allowing work to get done better and more quickly than before.
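One way to frame this pairing idea is as a classic assignment problem. The sketch below uses SciPy’s Hungarian-algorithm solver to match hypothetical analysts to tasks based on illustrative fit scores; in practice those scores might come from skills data or past performance, neither of which is modeled here.

```python
# Minimal sketch of AI-assisted tasking as an assignment problem.
# Analysts, tasks, and fit scores are hypothetical placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

analysts = ["Analyst A", "Analyst B", "Analyst C"]
tasks = ["I&W review", "Briefing prep", "Imagery triage"]

# Estimated fit of each analyst to each task (higher is better); negate it
# because the solver minimizes total cost.
fit = np.array([
    [0.9, 0.4, 0.2],
    [0.3, 0.8, 0.5],
    [0.2, 0.5, 0.7],
])

rows, cols = linear_sum_assignment(-fit)
for r, c in zip(rows, cols):
    print(f"{analysts[r]} -> {tasks[c]} (fit {fit[r, c]:.1f})")
```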
Similarly, AI could help managers evaluate performance and screen job applicants for aptitude in a particular skill or even identify all-around stars, much like Special Operations Command is exploring with Marine Raider applicants.27 The benefit of these nonanalytical uses of AI is that when analysts see AI aiding them in their work rather than competing with them, they will likely become more comfortable working with AI as it moves into more analytical tasks.
AI is not coming to intelligence work; it is already here. But the long-term success of AI in the IC depends as much on how the workforce is prepared to receive and use it as on any of the 1s and 0s that make it work.
Our analysis began with Department of Labor O*NET data for the intelligence analysis occupation. However, since the O*NET data for intelligence analysis is based on very few survey responses, we supplemented it with similar occupations, such as investigative police officer, to create a list of detailed work activities, or “tasks,” that could accurately represent the breadth of intel analysis.
In developing our model, we then broke the tasks into two archetypes to reflect some of the diversity in the type of work intelligence analysts can perform. For each archetype we assigned tasks to different stages of the intel cycle and included rough levels of effort for each task. Next, we calculated the automation potential for each task using the same algorithm from our previous research into the impact of AI on government.28 The calculation considers various factors, including how much social intelligence, creative intelligence, and perception or manipulation each task requires to estimate how automatable the task is.
Tasks that are more amenable to automation will feature time savings, while tasks less suitable to automation may actually see time gains as analysts are able to spend more time on these activities (figure 4). For even more background on our approach, see the analysis from our original report.
How much time AI may save on a particular task is a function of how much interpersonal interaction, creativity, or manual dexterity the task requires. Two collaboration-related tasks illustrate the difference: one focuses on sharing information, a highly automatable activity, while the other focuses on working closely with others, a far less automatable task that requires significant interpersonal interaction.
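To make the mechanics of the model concrete, the toy calculation below rolls task-level hours and automation-potential scores up into an annual savings estimate. The tasks, hours, and scores shown are illustrative placeholders, not the actual inputs behind the 364-hour estimate in this report.

```python
# Toy version of the task-level model described in this appendix: each task
# gets an hours-per-year estimate and an automation-potential score (0-1),
# and the savings roll up to an annual figure. All numbers are illustrative.
tasks = [
    # (task, hours per year, automation potential)
    ("Fuse and label collected data", 500, 0.60),
    ("Draft routine summaries",       300, 0.50),
    ("Brief decision-makers",         250, 0.05),
    ("Develop analytic judgments",    600, 0.10),
]

saved = sum(hours * potential for _, hours, potential in tasks)
print(f"Estimated annual hours saved: {saved:.0f} "
      f"(~{saved / 8:.0f} working days)")
```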