Four futures of generative AI in the enterprise: Scenario planning for strategic resilience and adaptability

When it comes to generative AI, the stakes are high—and so is the uncertainty. These three-year scenarios can help organizations plan their gen AI strategies.

Laura Shact, Brad Kreit, Gregory Vert, Jonathan Holdowsky, Natasha Buckley, and Brenna Sniderman

Organizations are making big bets on generative AI.

Nearly 80% of business and IT leaders expect gen AI to drive significant transformation in their industries in the next three years.1 Global private investments in gen AI have skyrocketed, increasing from roughly US$3 billion in 2022 to US$25 billion in 2023.2 And that pace continues unabated, with some US$40 billion in global enterprise investments projected in 2024 and more than US$150 billion by 2027.3 These efforts toward transformation may take on added importance as economists anticipate that labor force participation will decline in the coming years as the population ages, suggesting a need to boost productivity.4 In a world where some forecasts suggest that dramatic advances like artificial general intelligence (AGI) may be possible by the end of this decade,5 or that a digital twin could attend meetings on your behalf within five years,6 the possibilities for gen AI seem limited only by one’s imagination.

It’s a world where most don’t want to be left behind: Online references to FOMO—the “fear of missing out”—have increased more than 60% in the past five years.7 Though not specific to gen AI alone, the sentiment captures the reality that uncertainty underlies any bet, and the level of uncertainty regarding gen AI’s impact on the enterprise is significant.8 Predictions for growth and opportunity highlight one possible future, but a future where AI advances slow down and organizations encounter significant financial barriers to scaling and growing their AI capabilities is also possible.

How can organizational leaders wrap their minds around the future of AI in the enterprise and develop the best strategies to meet whatever comes?

To explore this question, Deloitte’s Center for Integrated Research conducted a foresight analysis of gen AI’s critical uncertainties pertaining to the enterprise, drawing from quantitative surveys, subject matter specialist interviews, and a proprietary horizon scanning initiative designed to capture near- to medium-term trends in gen AI (see the methodology section at the end of this article). Using this analysis, we then developed four scenarios of plausible futures depicting the possible evolution of gen AI and its potential impact on the enterprise between now and the end of 2027. Each scenario presents a different world in which gen AI and the enterprise coexist and coevolve, with different implications for enterprise strategy, policy, and practice.

By using scenario planning to navigate uncertain market dynamics, organizations can pressure-test their strategies under multiple possible futures and strengthen their investments. These scenarios are not meant to tell leaders what will happen or what should be done, but rather to challenge and inspire organizations to think about how to anticipate and manage the risk and uncertainty surrounding gen AI. They are designed to help leaders grasp the critical long-term issues that may shape their organizations and to equip them with more agile, forward-looking strategies. This is not a comprehensive view of every scenario that could emerge, but a focus on a specific set of scenarios that will likely be relevant for enterprises as leaders continue to build their gen AI strategies. By exploring how current strategies and investments in gen AI could unfold over time in different contexts, organizations can uncover hidden risks and opportunities and make smarter strategic choices today.

Building the scenarios: Defining gen AI’s critical uncertainties

Our research employs a method known as the “axes of uncertainty,” an approach to scenario development that crosses two variables, each highly uncertain and high in impact, to define four possible futures.9 While there are many questions about the cost, regulation, and continued advancement of gen AI, our analysis is focused on long-term implementation outcomes. Based on qualitative interviews with AI subject matter specialists at Deloitte and input from multiple ongoing global enterprise surveys, the following two variables were identified:

  • Stakeholder outcomes (x-axis): What are the economic, social, and workplace consequences of the rollout of gen AI? Will the continued rapid rollout of gen AI drive overall shared growth for business and society or unequal growth?

As organizations move rapidly to implement and scale gen AI tools, there is considerable excitement about expanding access to these tools to increase innovation, alongside equal, if not greater, fear about potential job losses and increased inequality. Understanding these trajectories can help in making sense of longer-term growth opportunities as well as risks related to stakeholder trust, employee skill decay, looming regulatory impacts, and more.

  • Achieving performance objectives (y-axis): Will organizations be able to realize financial and operating performance improvements through their gen AI initiatives, or will gen AI outputs not only fail to meet return-on-investment expectations but also contribute to organizational confusion and performance decay?

Despite the excitement around gen AI, many organizations have struggled to identify or scale clear, high-value use cases that align with critical business goals. As organizations look to scale their efforts around gen AI, they may face a variety of challenges related to how best to integrate these tools into everyday work processes. Understanding these uncertainties can help make sense of the services, safeguards, and strategies that may need to be adopted to ensure that gen AI tools add value to the enterprise.

Each scenario based on these variables (figure 1) explores a possible future and examines supporting evidence, including data points, specialist perspectives, and early indicators of possible change. Each scenario also highlights implications for present-day strategies.
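
For readers who want to see the mechanics, the short sketch below is a purely illustrative rendering (in Python, not part of Deloitte’s methodology or tooling) of how the two axes described above cross to produce four scenario quadrants. The end-state labels are paraphrased from the axis descriptions, and the quadrant numbering is arbitrary rather than matching the scenario order used in this article.

```python
# Illustrative sketch only: the "axes of uncertainty" construction crosses two
# critical uncertainties, each with two plausible end states, to yield four
# scenario quadrants.
from itertools import product

# End states paraphrased from the axis descriptions above (assumed wording).
x_axis = ("shared growth", "unequal growth")            # stakeholder outcomes
y_axis = ("objectives achieved", "objectives missed")   # performance objectives

for i, (outcome, performance) in enumerate(product(x_axis, y_axis), start=1):
    # Each combination of end states defines one quadrant of the 2x2 matrix.
    print(f"Quadrant {i}: stakeholder outcomes = {outcome}; "
          f"performance objectives = {performance}")
```

In practice, each quadrant is then fleshed out into a narrative, as the four scenarios that follow illustrate.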

Exploring the scenarios

Scenario 1: Growth with cost

In just a few years, large language models (LLMs) have grown increasingly sophisticated in their abilities to handle complex prompts and produce contextually relevant output. Not only have the tools themselves progressed, they have also been embedded into smart devices, wearables, and industrial equipment to create a layer of automated intelligence across the organization. Some workers find this increasingly invasive, but it helps drive enterprise value. Many organizations have begun to turn these intelligent systems into digital agents that can automate complex, open-ended personal and business tasks, working alongside humans or other agents.

While technical advances show little sign of slowing, the impacts of gen AI on businesses are mixed. Across most industries, organizations that invested early and aggressively in transformation gained efficiencies and new product capabilities that have translated into pricing advantages and market growth. This has given early adopters an enduring advantage over competitors, while organizations that did not invest in gen AI early on are struggling to catch up. But the biggest winners are the small number of hyperscalers and semiconductor companies whose foundational models and technology infrastructure are driving economic growth.

While users love these tools, their improving capabilities make many operations-oriented roles increasingly unnecessary. Unemployment is on the rise, and while economists disagree about gen AI’s role in that rise, workers (both employed and unemployed) increasingly blame AI for job losses, as well as for underlying feelings of fear and financial uncertainty.

These factors have not only reduced confidence in the economy but also contributed to political polarization and further erosion of trust in large institutions—and this lost trust is costly. Research indicates organizations risk losing 20% to 56% of their market cap following a negative trust-related event.10

Malicious global actors, empowered by the technology’s increasing sophistication, are using gen AI in frequent digital attacks, further worsening public opinion. Ironically, this erosion of trust is creating new demand for more personalized customer service and human interaction.

Impacts: Opportunities and risks in a “growth with cost” future

People

Opportunities

  • Most workers appreciate the increasing sophistication of gen AI capabilities, which frees their time to focus on more fulfilling and purposeful tasks.
  • Organizations that strengthen their employee value propositions by ensuring that their workers benefit from gen AI (e.g., as a productivity aid or creative collaborator) can gain long-term advantage over those that do not.

Risks

  • Career paths for younger generations shift to jobs and tasks less affected by gen AI, such as specific types of trades. This puts organizations at risk of losing access to younger workers who may have more fluency in emerging technologies but don’t see viable entry-level career paths.
  • Organizations that rushed to achieve efficiencies are scrambling to rebuild trust with the existing workforce, including top talent, who now depend on gen AI tools to accomplish what their former colleagues once did.
  • Organizations can miss opportunities to enhance innovation as workers feel less agency.

Technology

Opportunities

  • Organizations that integrated gen AI into industrial equipment, robotic systems, and other enterprise technologies gain new business capabilities and line-of-business growth opportunities that they would not have by simply offering a stand-alone tool.11
  • Keeping humans at the helm of human and AI agent relationships (“controlling the loop” rather than simply being “in the loop”) for quality control helps reduce error rates that can compound over time.

Risks

  • Continued growth and investment in gen AI could spur the proliferation of new alternative models, potentially rendering existing architectures obsolete.
  • Gen AI has taken over software design, code development, and customer interactions, creating more opaque solutions that are increasingly difficult for humans to understand and manage. 
  • Tech ecosystem complexity can balloon as issues like vendor lock-in, intellectual property management, and regulatory pressures may escalate.

Culture

Opportunities

  • Organizations that have a strong culture of continuous learning, upskilling, and career growth are likely better able to adapt to the changing landscape while minimizing impacts to worker morale.
  • Organizations where leaders focus on strengthening trust and using gen AI as part of a larger innovation and transformation strategy can keep workers inspired rather than fearful of technical advances. 
  • Organizations increasingly turn to diversity, equity, and inclusion (DEI) and wellness programs to help worker morale.

Risks

  • Waves of restructuring across industries can have an impact on corporate cultures and cause worker turnover, especially in companies that have focused solely on cutting costs.
  • Value beyond efficiency could be missed, resulting in significant opportunity costs. Organizations can struggle to innovate given the emphasis on cost-cutting.

Signals: How do we know this scenario is plausible by 2027?

  • The gap between leading gen AI adopters and laggards is growing. Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey found that 73% of surveyed organizations reporting high levels of expertise were adopting gen AI fast or very fast, while only 40% of organizations reporting some level of expertise said the same.12
  • Trust is already waning. Edelman’s Trust Barometer shows that the share of people globally who trust AI companies fell from 62% in 2019 to 54% in 2024.13
  • Younger generations recognize impending change. In a recent workforce study, 67% of early career workers surveyed say they will likely change their career path in the next two to five years given the impact of AI on their current job, compared to 46% of tenured workers.14

Strategies: What steps should you consider to prepare for this scenario?

  • Pair investments in gen AI with efforts to strengthen organizational culture and trust at the beginning of your gen AI journey. In this scenario, many organizations focus primarily on cost-cutting efforts that may provide short-term efficiencies at the expense of employee trust and engagement in work. By strengthening trust as part of a gen AI strategy, organizations can reduce these impacts on morale while setting the organization up for stronger growth over time. Embedding greater transparency into an organization’s gen AI strategy can help foster trust. This may include assigning responsibility for the output of specific gen AI and AI agent capabilities.
  • Build AI fluency and education at all levels to help democratize access to AI capabilities. This can help workers self-select if they want to be part of the organization’s AI-enabled future.
  • Cultivate and support grassroots gen AI use-case development. Design programs, such as “promptathons,” that promote and celebrate gen AI adoption across functions. An organization can reinforce its commitment to capitalizing on gen AI while tapping into local workforce enthusiasts who can identify valuable use cases as well as issues that restrict scaling. 
  • Emphasize and provide incentives to enhance organizational creativity. Intentionally focus on new ways to spur innovation, reward outside-the-box thinking and enhance diversity of thought (for example, by tapping into ongoing diversity, equity, and inclusion initiatives and neuroinclusion programs).15
  • Examine workforce planning strategies at various levels of the organization. Reinvent the employee value proposition and revisit how your organization attracts and retains talent considering how workers at different levels and stages of their career may be affected by gen AI. In particular, organizations may be at risk of reducing entry-level opportunities for the early career workforce even though these workers may be best positioned to identify opportunities presented by gen AI tools. To avoid this potential pitfall, explicitly define entry-level talent strategies including what skills workers will develop and what experiences and expertise will be required to advance within the organization.
  • Determine how leadership should communicate anticipated changes to reinforce its vision and enhance employee trust. To inspire trust, leadership should reaffirm the organization’s commitment to the technology, articulate how it supports the company’s purpose, and evangelize its benefits to employees. At the same time, leaders should be transparent in communicating potential risks that accompany widespread gen AI adoption and how they plan to respond.

Scenario 2: Bubble bursts

As expectations for gen AI dramatically outpace real-world capabilities, the sense of frustration is palpable in this scenario. Despite high hopes and huge investments, businesses have struggled to reap many of the benefits they had anticipated from gen AI, causing a downturn in the tech industry and the wider market. Part of the challenge has been a quirk in how gen AI tools have improved: While the aesthetics of machine-generated output—including text, video, imagery, and more—have continued to improve rapidly, hallucinations and other accuracy errors have proven much more intractable. As organizations rapidly deploy and scale gen AI tools, this dynamic leads to confusion as decision-makers are inundated with data and analysis that appear to be high quality but often contain subtle errors and misinformation. These challenges are compounded in organizations that moved too aggressively to reduce headcount ahead of expected efficiency gains that fail to materialize, contributing to increased unemployment and leaving the remaining workforce stretched thin.

In addition, biases from training data make their way into the output and analyses of these systems, perpetuating inequalities across global business and society. At the same time, bad actors are able to take advantage of gen AI’s capabilities to increase disinformation and cyberattacks. Taken together, these factors contribute to a growing sense of mistrust not just in gen AI but also in the larger information environment.

Despite these difficulties, not everything has been negative. As organizations have recalibrated their expectations for gen AI, they have begun identifying clear use cases where the tools are not only improving efficiencies but also contributing to top-line growth and innovation. Ironically, organizations that took measured approaches to AI implementation are better positioned to succeed in this future by committing to greater collaboration, particularly with customers and workers, as the gen AI transition unfolds.

Impacts: Opportunities and risks in a “bubble bursts” future

People

Opportunities

  • Talent flocks to companies and sectors that did not rush to make significant changes. These companies and sectors may be less in need of rapid course corrections.

Risks

  • Organizations that moved quickly to use gen AI to reduce costs are at risk of increased anxiety, reduced satisfaction, and increased loneliness among the workforce.
  • Organizations can face substantial challenges related to skill erosion if they focused on lowering costs at the expense of investing in opportunities to increase innovation. These same companies may have trouble attracting new talent as a result.

Technology

Opportunities

  • Organizations that embraced a slower rollout of gen AI solutions are better able to ensure that data management, platform integrity, and cybersecurity capabilities are in place. This allows them to mitigate external risks, ensure data quality, and build long-term infrastructure resilience while managing more deliberate rollouts of gen AI.
  • Organizations can gain productivity advantages by buying technology and software with gen AI already integrated.
  • Organizations may leverage small language models that target specific functional use cases rather than frontier LLMs, thus saving costs and reducing negative environmental impacts of resource-intensive larger models.

Risks

  • The capabilities of bad actors can advance quickly, forcing organizations to increasingly invest in cyber protection measures.16 Regulatory uncertainties can create IP management questions.
  • Rapid adoption of gen AI in the short term (e.g., via adoption of chatbots, image and code generation) can undermine overall software quality and introduce new levels of technical debt, complexity, and opacity. This may raise doubts about the ability to improve AI outcomes over the long term.17
  • Too much synthetic data used in training LLMs may cause model performance to degrade over time, adding more stress on workers to compensate where gen AI is not meeting expectations.18

Culture

Opportunities

  • Organizations with measured approaches to technology adoption—including cultures focused on careful quality control—could gain the most from adopting gen AI tools by finding high-value use cases and minimizing missteps.

Risks

  • Biases in training data can perpetuate inequalities, undermining the effectiveness of DEI programs.
  • Organizations that are focused on the most powerful technology due to FOMO could be particularly hard hit as they focus on the “sizzle” rather than the most useful or purpose-built tools to advance organizational goals.


Signals: How do we know this scenario is plausible by 2027?

  • Gen AI quality is already a significant concern. More than one-third of surveyed executives in Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey selected “lack of confidence in results” as one of their top three risks related to gen AI tools and applications.19
  • Many companies have yet to track gen AI’s ROI. Only 35% of respondents to Deloitte’s third quarter 2024 State of Generative AI in the Enterprise survey indicate they are tracking ROI to measure and communicate value from their gen AI initiatives.20 Furthermore, early feedback from industry analysts suggests that these tools may not be worth the cost.21
  • Organizations recognize that AI bias is a risk as they scale. Multiple independent tests of gen AI output in interview screening suggest that bias perpetuated at scale is a significant risk.22
  • Adoption can take longer than expected. Prior experience suggests progress leveraging new technologies is not always linear. Technologies like virtual reality and autonomous cars showed a lot of potential early in their development but have proven much more difficult to adopt and commercialize than expected.
  • Trust in the information environment continues to erode. The ability to trust what is seen and heard has been continually eroded. Deepfakes—synthetically altered or generated images—first appeared in late 2017. Initially, it was not easy to create these fake images, and their quality was poor. But advances in AI have vastly improved outputs in recent years, often delivering photorealistic results. Tests have shown that many people have a tough time distinguishing real faces from those that are AI-generated. What’s more, many respond more positively to the generated images.23

Strategies: What steps should you consider to prepare for this scenario? 

  • Focus gen AI investment on the biggest business problems to solve. Technology investments are likely to underperform and damage an organization’s culture and morale when organizations expect benefits to materialize faster than realistically possible. Organizations that carefully match their gen AI investments to underlying business needs may be better positioned to mitigate those risks in this future.
  • Avoid skill and knowledge erosion. In this scenario, organizations put their long-term futures at risk by underinvesting in human skills and capabilities, as well as by losing long-term institutional knowledge and culture. Developing comprehensive plans to avoid these kinds of losses—especially if gen AI investments take longer to deliver than hoped—can be critical to the long-term resilience of gen AI efforts.

Scenario 3: Advancement depends on humans

Being first is less of an advantage in this scenario than being strategic. Over the course of three years, almost every promising technical advancement related to gen AI has been accompanied by a half step backward for industry as organizations struggle to keep pace. The race to integrate AI—and the fear of falling behind competitors—has pushed many organizations to adopt and scale tools before they are fully tested, leading to confusion about gen AI’s strengths and limitations. As a result, productivity gains that appeared in early pilots and studies prove difficult to re-create at scale, leaving many organizations frustrated by modest improvements relative to the scale of investment.

These scaling challenges stem from the ways in which the very tools that empower individual workers contribute to a larger environment of fractured attention. New capabilities like instantly turning a handful of bullet points into a presentation or sending a virtual avatar to a lower-priority meeting enable individual workers to supercharge their output, but they also contribute to a culture of productivity theater where many workers feel pressure to produce a higher volume of work to prove their value. As this dynamic plays out, it contributes to a noisier and more confusing information environment for decision-makers and managers.

Organizations also struggle to manage the mismatch between the rapid pace of technological change and the slower speed of regulation and lawmaking. Over the course of three years, this regulatory uncertainty limits some seemingly promising use cases for gen AI, such as customer service bots, due to unclear risks and liabilities. On the other hand, slower adoption also mitigates potential job loss. By the summer of 2027, the winning organizations are those that have paired their technology investments in AI with similarly large-scale efforts in process improvements and work redesign that focus on connecting the use of gen AI to clear outcomes and goals. Even as some of the hype around gen AI has receded, this approach to pairing human and machine capabilities shows increasing promise to create opportunities from ongoing advances with LLMs and other models such as small language models.24

Impacts: Opportunities and risks in an “advancement depends on humans” future

People

Opportunities

  • Not all improvements in individual task efficiency add up to larger organizational gains. Organizations that pair efforts to improve individual efficiency with process design may gain advantages over those that take more siloed approaches.

Risks

  • Using gen AI tools to give the appearance of busyness can undermine the potential value of workers in many organizations, driving alienation from the firm’s core mission.
  • A lack of holistic and integrated performance measurements could undermine the benefits of gen AI investments. Key performance indicator improvements—without an accompanying strategy to integrate across those measures—can yield limited ROI at an organizational level. 
  • Humans are still needed to catch errors and hallucinations and spend increasingly large amounts of time on quality control of machine-generated output rather than on more engaging and creative tasks.

Technology

Opportunities

  • Viewing gen AI advances as a set of new tools for professional growth rather than stand-alone automation solutions can optimize value for the workforce.

Risks

  • A spike in quality-related issues (in code/application development, data and modeling, cybersecurity, etc.), along with advancements in alternative AI technologies (e.g., world models25), often causes organizations to either slow down their gen AI programs or reevaluate their business and technology strategies.
  • Technology budgets, IT teams, and broader data and platform modernization efforts cannot keep pace with the demand from the business to integrate gen AI into digital products.
  • Regulatory uncertainty related to data and intellectual property, among other areas, can limit the ability to apply AI tools to potential high-value use cases.

Culture

Opportunities

  • Organizations that focus on pairing technology investments with a clear sense of intended outcomes and key metrics may gain advantage. These organizations focus on developing new ways of measuring work activity that can help ensure that worker activity—including the use of gen AI tools—is aligned to key business outcomes.

Risks

  • Many organizations roll out technology tools before they are ready to use them, fearing they will miss out on advances in gen AI.

Signals: How do we know this scenario is plausible by 2027?

  • The potential financial risks are already becoming evident. Companies are already being held financially liable for gen AI errors, such as providing incorrect pricing information to customers.26
  • Scaling may be proving elusive. In Deloitte’s third quarter 2024 State of Generative AI in the Enterprise survey, nearly 70% of surveyed respondents said their organization had moved only 30% or fewer of their gen AI experiments into production.27
  • Data leakage resulting from “shadow AI” usage is likely increasing. Recent studies indicate that a significant percentage of gen AI usage at work is occurring on personal accounts.28

Strategies: What steps should you consider to prepare for this scenario? 

  • Create holistic assessments of gen AI efforts. Identify feedback mechanisms to better understand the successes and challenges workers face as they learn to work with gen AI. As part of this effort, it will be important to determine how worker performance criteria should be adjusted to reflect new operating processes and norms. Adjusted criteria should then be connected to critical business objectives rather than easy-to-measure outputs. Maintaining this holistic view may also require keeping humans at the helm rather than simply in the loop to ensure accuracy and alignment with larger objectives.
  • Ensure assessments are transparent throughout the organization to build worker trust. This also means being transparent about who in the organization is responsible for specific AI agent output and ensuring that those who are responsible for AI agents have the time and resources to oversee their output.
  • Develop gen AI’s future in your organization with a “both and” vision. For gen AI to support enterprise growth, it should be positioned as both a core capability and an integrated component alongside other capabilities to drive business opportunities and growth.
  • Adopt a “less is more” mindset. Focus on validated and strategic use cases rather than pushing for widespread adoption as quickly as possible.
  • Evaluate the emerging and shifting regulatory environment. Understand and monitor evolving regulatory impacts across geographic regions, including how conflicting regulatory frameworks may affect domestic and global cross-border business activity.

Scenario 4: All systems go

By 2027, one thing is clear: The transformation is just beginning. Over the course of just three years, advances in gen AI have diffused across industries and are driving a new wave of creative innovation across a variety of fields. Organizations have found novel ways to combine the productive capabilities of gen AI tools with other scientific and technological advances—from robotics to biology to other branches of machine learning—unleashing a wave of innovation and growth that spans the economy. This collaborative innovation has enabled domain specialists across scientific, technical, and business disciplines to benefit from the advances of gen AI.

This broad-based growth has been driven by continued technical advancement, including continuous improvement in LLMs and growth in targeted small language models, coupled with human ingenuity. Early gen AI rollouts successfully freed up time and attention across many organizations, enabling the workforce to focus on more complex, creative, and high-value work. As this takes place, organizations and their employees discover the value of emotional intelligence, critical thinking, imagination, and other human capabilities that cannot easily be replicated by AI.

This increased focus on human capabilities and skills helps organizations adjust work processes and systems to take advantage of the benefits of gen AI tools while minimizing issues related to quality and reliability. Efforts to customize and fine-tune models enable organizations to reduce hallucination and error rates while simultaneously ensuring that the output of gen AI tools is tailored to the nuanced needs of different functions.

This allows for significant gains in productivity, transformation, and top-line growth without significantly increasing unemployment. While workers and organizations have broadly benefited from these gains, the pace of change and demands for adaptation have been relentless. Many workers and business leaders have had to reinvent how they work and develop a new understanding of their value, particularly in roles such as marketing and software engineering where gen AI has progressed the fastest.

In addition to managing these workforce adaptation challenges, organizations face increasing security and information challenges as bad actors use sophisticated, multimodal deepfakes to attack organizations that underinvest in cybersecurity, requiring constant updates to security protocols. By late 2027, the need to adapt and integrate gen AI has only accelerated. Much like the dot-com boom of the late 1990s was the start of a much larger transformation, the innovations driven by advances in gen AI appear to be the beginning of a much more profound set of changes.

Impacts: Opportunities and risks in an “all systems go” future

People

Opportunities

  • Organizations accelerate worker learning and upskilling through AI coaching. By training bots on enterprise data, organizations can create customized AI coaching tools that provide real-time support and feedback to enable workers to develop faster.  
  • Organizations that can focus more of their workforce on new innovations, experiments, and product and service development could grow faster and more durably than those focused primarily on efficiencies.
  • As the pace of change accelerates, it becomes even more important to not only plan for full-time worker needs but also plan for skills, tasks, external workers and technology acquisition in an integrated way. Organizations that move to this more continuous and holistic approach to workforce planning may be able to more quickly adapt to the changing environment than competitors.

Risks

  • Employee wellness programs need to grow to account for an acceleration in already high mental health challenges for workers whose roles and skill requirements are shifting amid significant organizational change.
  • Even as benefits from gen AI are shared widely, many workers feel a loss of identity as their work shifts away from doing the work itself and toward managing the output of intelligent systems. This could be particularly true for more tenured professionals who see skills they have spent years honing affected by AI.
  • Early career workers feel that career choices are not as wide and improvisational as they once were, as gen AI imposes more of a technical gloss on what once were considered the “creative” fields.

Technology

Opportunities

  • Companies that go all in with investments in gen AI to drive their digital product and software innovation strategies can gain speed to market, productivity, and innovation advantages over those that view gen AI as a stand-alone solution. 
  • Early gen AI investors have likely also invested early in securing the limited number of cloud servers available and have cornered the market on scalable innovation. They can own the technology innovation ecosystem, with the potential to resell unused computing capacity and private models.

Risks

  • Multi-agent solutions become too complex to manage across the technology ecosystem, creating the potential for massive technical debt that slows down technology development over time.
  • The proliferation of technology includes alternatives to generative models, which can create a risk of obsolescence for recent gen AI investments.
  • Cyber and business continuity risks can grow with the increased proliferation of combined technologies, exposing organizations to new adversarial attacks.

Culture

Opportunities

  • Experiences remain critical to strengthening organizational culture and ensuring that workers feel a sense of belonging and alignment with an organization’s goals and mission. Organizations that can identify these key experiences—ranging from on-the-job training and apprenticeship to onboarding new workers—may be able to build stronger cultures that lead toward increased trust and innovation over time.

Risks

  • The use of gen AI and its ever-increasing power to search and extract undermines the patent process by making almost anything “obvious”—and therefore, non-patentable—with longer-term implications for human research and innovation.30 Over time, this could undermine the sustainability of an innovative culture.

Signals: How do we know this scenario is plausible by 2027?

  • AI capabilities are advancing through natural language and the proliferation of voice commands. Advances in gen AI are powering the robotic foundation models that enable individuals to communicate with machines using natural language. These robots accomplish tasks, such as warehouse picking, that would be nearly impossible with traditional simulation training methods.31
  • Companies are already expressing need for “human” skills as AI matures. More than half of executives and IT leaders surveyed in Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey expect human-centered skills including creativity and emotional intelligence to be more valuable across the organization because of the adoption of gen AI tools and capabilities.32

Strategies: What steps should you consider to prepare for this scenario? 

  • Invest in holistic learning and development efforts. As advances from gen AI accelerate, workers at different levels of experience not only will likely see their roles shift but also could be challenged to rapidly learn new skills and adopt new ways of thinking about their work. As part of this effort, organizations can share the company’s vision for AI and how it can improve organizational and worker success. Preparing the organization for business process and cultural change will be important to ensuring that the workforce can successfully adapt.
  • Continuously plan for workforce needs. As gen AI presents profound transformational opportunities, organizations may benefit from moving beyond episodic planning to more continuous workforce planning based on rapid shifts in the marketplace. As part of this effort, organizations can identify learning and development opportunities to help strengthen how workers collaborate with AI agents.
  • Focus on performance management transparency to maintain workforce trust. Continuous growth for the organization will depend on workers maintaining trust in the company’s gen AI strategy. Clearly specify what is expected of workers and what they will be able to rely on AI agents to accomplish.
  • Capitalize on performance gains by increasing stakeholder value. Assess how the organization will fairly distribute the benefits that gen AI provides to all stakeholders, particularly workers. In addition to financial benefits, organizations should also identify opportunities to improve the human sustainability impacts of gen AI on worker morale and wellness.

What’s next: How to use these scenarios

To use these scenarios to pressure-test your own gen AI strategies and make them more resilient, consider the following questions:

  • What strategies discussed thus far would benefit your organization across multiple scenarios? For example, embedding greater transparency into initiatives to engender worker trust, focusing on validated and strategic use cases that solve the biggest business problems rather than pushing for widespread adoption as quickly as possible, and cultivating an ethos of continuous learning to fight skill and knowledge erosion are just a few strategies that support the organization’s culture and operating performance regardless of how the technology and its impact on business and society evolves.
  • What gen AI projects are you currently working on that would be likely to succeed in each of the scenarios presented here? Whether they are small-scale pilots or larger efforts, these initiatives may be opportunities to accelerate investment and effort.
  • Where are your strategies vulnerable in light of the different scenarios? Investments that are well positioned for one or two scenarios may not produce the anticipated ROI if the future plays out differently from what your current plans assume. As you identify these potential gaps and weaknesses, consider how you might mitigate these risks.

These scenarios are not predictions of a specific outcome; rather, they are designed to help organizations develop gen AI strategies under multiple conditions of uncertainty. They represent an opportunity for executives to evolve their mindset and lead with greater confidence in an environment where the stakes are high—and so is the uncertainty.

Methodology

The findings in this report were developed using a combination of qualitative and quantitative data sources drawing heavily from Deloitte’s global 2024 State of Generative AI in the Enterprise quarterly survey and interview series, which monitors enterprise adoption trends across industries from the perspectives of executive and IT leaders. To develop insight into emerging phenomena and innovations, Deloitte’s Center for Integrated Research has been conducting an ongoing horizon-scanning initiative since January 2024 related to gen AI and the future of the enterprise. We used this combination of inputs to inform a discrete analysis of critical uncertainties using the axes of uncertainty scenario planning method, which has been in use for more than 50 years as a tool to help decision-makers develop more resilient plans under conditions of high uncertainty.33 To further validate the scenarios, we conducted a series of interviews with Deloitte gen AI leaders, sector specialists, and foresight analysts.



Endnotes

  1. Deborshi Dutt et al., State of Generative AI in the Enterprise: Quarter one report, Deloitte, January 2024, p. 7.
  2. Nestor Maslej, “Inside the new AI Index: Expensive new models, targeted investments, and more,” Stanford University Human-Centered Artificial Intelligence, April 15, 2024.
  3. Karen Massey, “What is GenAI spending by industry expected to be in 2024?” IDC, January 2024; Hayden Field and MacKenzie Sigalos, “AI craze is distorting VC market, as tech giants like Microsoft and Amazon pour in billions of dollars,” CNBC, September 6, 2024.
  4. US Department of Labor Bureau of Labor Statistics, “Employment projections — 2023–2033,” press release, August 29, 2024.
  5. Zoë Corbyn, “AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’,” The Guardian, June 29, 2024.
  6. Nilay Patel, “The CEO of Zoom wants AI clones in meetings,” The Verge, June 3, 2024.
  7. Deloitte analysis using Google Trends.
  8. Tal Roded, “Putting the economic impact of GenAI into scale,” MIT FutureTech, June 20, 2024.
  9. Diana Scearce, Katherine Fulton, and the Global Business Network community, What if? The art of scenario thinking for nonprofits, Global Business Network, a member of Monitor Group, 2004.
  10. Jennifer Lee, Nick Galletto, and Praveck Geeanpersadh, “The chemistry of trust: Part 1: The future of trust,” Deloitte, 2021, p. 4.
  11. Emma Roth, “Boston Dynamics turned its robot dog into a talking tour guide with ChatGPT,” The Verge, October 26, 2023.
  12. Nitin Mittal et al., State of Generative AI in the Enterprise: Quarter two report, Deloitte, April 2024, p. 10.
  13. Gary Grossman, “Rebuilding trust to reach AI’s potential,” Edelman, March 28, 2024.
  14. Elizabeth Lascaze et al., “Will AI reshape or erase their careers? The early career perspective,” Deloitte Insights, forthcoming.
  15. Deborah Golden et al., “The neurodiversity advantage: How neuroinclusion can unleash innovation and create competitive edge,” Deloitte Insights, July 12, 2024.
  16. Joseph Cox, “Inside the underground site where ‘neural networks’ churn out fake IDs,” 404 Media, February 5, 2024; Lynn Greiner, “Criminals, too, see productivity gains from AI,” CSO Online, June 12, 2024.
  17. Ilia Shumailov et al., “The curse of recursion: Training on generated data makes models forget,” arXiv, April 14, 2024.
  18. Aatish Bhatia, “When A.I.’s output is a threat to A.I. itself,” New York Times, August 25, 2024.
  19. Mittal et al., State of Generative AI in the Enterprise: Quarter two report, p. 17.
  20. Jim Rowan et al., State of Generative AI in the Enterprise: Quarter three report, Deloitte, August 2024, p. 21.
  21. Allison Nathan, “GenAI: Too much spend, too little benefit?,” Goldman Sachs’ Top of Mind 129 (June 25, 2024), p. 1.
  22. NYU Tandon School of Engineering, “Women may pay a ‘MOM PENALTY’ when AI is used in hiring, new research suggests,” press release, December 12, 2023; Kate Glazko et al., “Identifying and improving disability bias in GPT-based resume screening,” in The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), June 3–6, 2024, Rio de Janeiro, Brazil (New York: ACM, 2024).
  23. Grossman, “Rebuilding trust to reach AI’s potential.”
  24. James Thomason, “Why small language models are the next big thing in AI,” VentureBeat, April 12, 2024.
  25. World models understand and interact with the three-dimensional world, in contrast to language-based AI models. Marina Temkin, “Fei-Fei Li’s World Labs comes out of stealth with $230M in funding,” TechCrunch, September 13, 2024; Melissa Heikkilä and Will Douglas Heaven, “Yann LeCun has a bold new vision for the future of AI,” MIT Technology Review, June 24, 2022.
  26. I. Glenn Cohen, Theodoros Evgeniou, and Martin Husovec, “Navigating the new risks and regulatory challenges of GenAI,” Harvard Business Review, November 20, 2023.
  27. Rowan et al., State of Generative AI in the Enterprise: Quarter three report, p. 5.
  28. Cameron Coles, “The cubicle culprits: How in-office employees are leading the charge in corporate data exfiltration,” Cyberhaven, March 2024.
  29. Steve Hatfield et al., The time is now for the quantified organization, Deloitte, February 2024.
  30. Adam S. Baldridge, Edward D. Lanquist, and Dominic Rota, “Generative artificial intelligence asks questions of innovation in patent law,” Baker Donelson, September 21, 2023.
  31. Elizabeth Gibney, “The AI revolution is coming to robots: How will it change them?,” Nature, updated May 31, 2024.
  32. Mittal et al., State of Generative AI in the Enterprise: Quarter two report, p. 21.
  33. Scearce et al., What if? The art of scenario thinking for nonprofits.

Acknowledgments

The authors wish to thank everyone who shared their expertise and dedication.

Subject matter experts and advisers (in alphabetical order): Beena Ammanath, Chris Arkenberg, Gabriella Boros, Sue Cantrell, John Day, Ankit Dhameja, David Jarvis, Diana Kearns-Manolatos, Tiffany Kim, Florian Klein, Jeff Loucks, Cole Oman, Jim Rowan, and Rod Sides.

Core research team: Ireen Jose, Aditya Narayan, Saurabh Rijhwani, and Negina Rood.

Deloitte Insights production team: Hannah Bachman, Prodyut Borah, Corrie Commisso, and Blyth Hurley.

Cover image by: Jim Slatton