While generative artificial intelligence offers significant opportunities to improve organizational products and practices, the technology also introduces new risks, both internal and external. Deloitte’s fourth quarter State of Generative AI in the Enterprise study finds that managing risks and regulatory compliance are the top two concerns among global respondents when it comes to scaling their gen AI strategies.1
The challenges are intersectional, cutting across questions of data provenance, security, and how to navigate a still-maturing marketplace. Cybersecurity investments can help organizations address these challenges: Nearly three quarters (73%) of respondents in Deloitte’s third quarter State of Generative AI in the Enterprise study say they plan to increase their cyber investments because of gen AI programs,2 and 59% of US respondents in Deloitte’s Focusing on the foundation: How digital transformation investments have changed in 2024 study had invested in cyber capabilities in the previous 12 months.3
But as gen AI continues to introduce new risks, leaders need a way to make sense of this world and channel their cyber investments into strategies that work today—and well into the future.
The good news is that while new risks are emerging and converging, leading practices are also evolving — practices that can help shape the future of enterprise risk management, cyber, data, and engineering. In this final installment of Deloitte’s Engineering in the Age of Generative AI series, we focus on building a framework that cyber and risk leaders can consider when assessing internal and external gen AI risks and help them develop strategies to mitigate those risks by:
Our analysis of the fourth edition of Deloitte’s Global Future of Cyber Survey, which surveyed nearly 1,200 cyber decision-makers at the director level or higher, identified eight potential risks specific to gen AI, ranging from integrity risks like hallucinations, to social engineering attacks, to inadequate governance of gen AI strategies. Original analysis of that data for this research finds that respondents are equally worried about all of them, with 77% saying they were concerned “to a large extent” about how these risks may impact their cybersecurity strategies.
To provide a clearer understanding of the intersectional nature of these threats and the areas they impact, we can organize these gen AI risks into four distinct categories: risks to the enterprise, which include threats to organizational operations and data; risks to gen AI capabilities, which include the potential for AI systems to malfunction or their vulnerabilities to be misused; risks from adversarial AI, which include threats posed by malicious actors leveraging gen AI; and risks from the marketplace, which include economic, legal, and competitive pressures that could influence AI deployment and security (figure 1).
In the following sections, we explore each of these four risk categories in more detail to understand their potential implications and the strategies that can be employed to help manage them.
Gen AI introduces increased enterprise risk across data, applications, infrastructure, and processes. Our research shows some of the most prevalent are:
Data privacy, security, and intellectual property risks. Gen AI models are trained on large, diverse collections of text, images, and audio that are gathered from the web, generated by other models, or manually curated. Third-party data and models often don’t authenticate original sources, creator intentions, copyrights, or basic properties, making provenance hard to track4 and perpetuating potential hallucinations and misinformation.5
Additionally, gen AI is now creating art, music, literature, software, and inventions, leading to questions about authorship, ownership, and protection. Regulation around who owns what innovation and thus who can monetize it is still in the works.6 This has increased copyright ambiguity.
Gen AI also raises privacy concerns. Personal details like names and addresses might be collected unintentionally, leading to accidental exposure or misuse of sensitive information, trade secrets, or confidential data. For instance, a health care company might use a retrieval system to access patient records, enabling medical professionals to query a patient’s medical history using natural language, but also exposing that private data to unintended risk.
Security and employee risk across development processes. As gen AI is introduced into development processes, new security risks emerge. A Palo Alto Networks report found that AI-generated code was the top concern for surveyed security and information technology leaders.7 The Deloitte leaders interviewed for this article expressed concerns that AI might inadvertently amplify existing code-level vulnerabilities such as misconfigured code, increasing the risk of data breaches, malware infections, and reputational damage. The concern is grounded in the lack of transparency into third-party foundation models, which may introduce unknown vulnerabilities.
Additionally, employees are frequently using unsanctioned gen AI solutions. According to a 2024 AI Adoption and Risk Report from Cyberhaven Labs, usage of major gen AI tools at work is through personal accounts.8 Unauthorized use can unknowingly expose sensitive information to risk. For instance, Samsung banned the usage of gen AI tools among its employees after it was revealed that employees accidentally leaked sensitive data in public prompts.9
Given the evolving risk landscape, enterprise risk leaders and chief information security officers (CISOs) can consider the following actions to address risks to the enterprise:
In the absence of universal standards, tracing data lineages can help. For instance, the Data & Trust Alliance’s Data Provenance Standard11 is a collaborative effort among 19 corporations to document leading practices.
Strong intellectual property management strategies can embed trust into content creation. One method is using verifiable content credentials for images, videos, fonts, and audio files. Additionally, advanced digital rights management solutions can prevent unauthorized copying, provide real-time tracking, enforce licenses, and control digital asset distribution.12 Some companies have started embedding hidden signals, or watermarks, in their data to identify machine origins or prevent future machine use.13
Gen AI also introduces new security risks that target the data and models that gen AI solutions depend on. Emerging threats include:
Prompt injection attacks. A distinctive feature of gen AI solutions is their use of prompts or instructions given to the AI. Prompt injections are an emerging technique where attackers design prompts to deceive gen AI systems into revealing secure data, spreading misinformation or tricking the prompt into performing a malicious action or access the model via a backdoor with a hidden trigger.15 Prompt injections are the leading security threat aligned to large language model (LLM) applications on the Open Worldwide Application Security Project Top 10.16
Evasion attacks. Traditionally, in AI systems, an evasion attack happens when the model is deliberately misled or tricked by conflicting samples, known as “adversarial examples,” leading to incorrect output. Gen AI systems are often vulnerable to these attacks at a greater scale than traditional AI. Anyone with basic query privileges can probe the model with prompts intended to understand its predictive labels and confidence scores to engineer the model into making different decisions.17 These attacks can be used to bypass intrusion detection systems and more.
Data poisoning. External models trained on public data introduce a host of unknowns. Data poisoning is one such risk: a deliberate attack on an AI system where an adversary alters the model’s training dataset. Data poisoning techniques may include altering data to become deceptive, incorrect, or misleading. The risk of data poisoning increases with retrieval augmented generation systems.
Hallucinations in gen AI models. Gen AI models predict outputs from training data patterns, but when they hallucinate, the results may appear plausible yet be incorrect. Such inaccuracies can cause faulty decisions, damaged reputations, regulatory penalties, and lost opportunities.18 Like hallucinations, misinformation can be perpetrated through gen AI models, which may lead to loss of trust, financial loss, and negative impact on business decisions.
CISOs and chief technology officers should account for these increased risks and attack surfaces. This should include a broad approach to cybersecurity that encompasses various strategies and measures, including:
Second, instead of depending on rules-based solutions, organizations could enhance their defenses by implementing an AI firewall to monitor data entering and exiting the model, which can enable better threat detection.19 While tools like prompt shields can help detect hidden instructions, human oversight remains critical to judge and approve the generated output.20
AI-powered deepfake detection tools can also outperform humans in identifying subtle indicators to identify misinformation and machine-generated content—whether text, audio, images, or multiple formats in combination—on a large scale.
The National Institute of Standards and Technology recommends adversarial training for data models25 to replicate cyberattacks and help identify security gaps and fortify defenses. General adversarial networks (GANs) and smart vulnerability detection tools can uncover security flaws that traditional scanners may overlook.26
Gen AI technologies commoditize the skills needed to orchestrate cyberattacks, lowering the entry barrier for malicious actors. The fourth edition of Deloitte Global Future of Cyber survey shows close to one-third of organizations are concerned about phishing, malware, or ransomware (34%), and threats related to data loss (28%).27 Gen AI can increase the sophistication, scale, and ease of adversarial attacks. These types of risks may include:
AI-generated malware. Gen AI introduces more data and complexity into the digital ecosystem and enables attackers to automate malware and ransomware more easily, increasing both the scale and sophistication of known threats. For instance, hackers continuously create new malware and cybercriminals can now leverage gen AI to produce near limitless, sophisticated malware, potentially overwhelming conventional cybersecurity measures and response times.28 A study uncovered about 100 machine learning models capable of injecting insecure AI code onto user machines.29
Phishing attacks that seem more human. Gen AI has revolutionized phishing attacks. IBM’s 2024 X-Force Threat Intelligence Index found “AI” and “GPT” were mentioned in over 800,000 dark web posts in 2023.30 Adversaries can exploit technology to craft believable messages and automate large-scale, sophisticated social engineering campaigns. Gen AI tools have commoditized the ability to draft convincing, context-sensitive messages, text and audio messages—overcoming language barriers, and even integrating cultural subtleties.31 Moreover, AI’s capability to self-improve could enable it to develop autonomous phishing tactics based on its acquired knowledge.32
Impersonation attacks. The ability to create fake voices and videos through gen AI tools is an emerging risk that has matured over the last three years,33 significantly reducing the barriers to perpetuating impersonation fraud. For example, in banking, gen AI is expected to magnify the risk of deepfakes for financial fraud. Deloitte’s Center for Financial Services forecasts that gen AI could enable fraud losses to reach US$40 billion in the United States by 2027, up from US$12.3 billion in 2023, a compound annual growth rate of 32%.34
CISOs can evolve their strategies to address adversarial AI risks by:
Broader market risks are still taking shape, and many are largely outside of organizations’ control, including regulatory risk, infrastructure resilience, third-party risk, and others. Some leading risks include:
Regulatory uncertainties. According to Deloitte’s fourth quarter State of Generative AI in the Enterprise report, being able to comply with regulations is the biggest concern reported by organizations surveyed. These can span global and regional regulations that govern what’s possible and dictate the use of data, security and privacy considerations integral to managing gen AI risks. For instance, in the United States, the 2023 executive order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” included a provision that directed federal agencies to impose reporting requirements on entities developing, acquiring, or possessing advanced AI models or computing clusters.38 However, in 2025, the new administration issued subsequent executive orders that rescinded order 14110 and directed federal agencies to review existing AI policies and roll back any rules that may hinder AI innovation.39 As a result, the reporting requirements have been rescinded. Organizations would have to be mindful of the evolving regulatory landscape’s implications on gen AI offerings.
Computing infrastructure risk as gen AI scales. Gen AI demands significant computing resources, putting pressure on the already strained electric grid. Public utilities are often not agile, and it can be challenging to accurately estimate future energy demand.40 The aging infrastructure is struggling to keep up. According to Deloitte’s 2025 power and utilities industry survey, respondents cited grid infrastructure limitations as the key challenge in providing reliable power to data centers.41 According to Deloitte analysis, data centers, which currently consume 6% to 8% of total annual electricity generation in the United States, could potentially see their electricity needs grow to between 11% and 15% by 2030.42
New value chains. Data centers face new uncertainties, and thus risks and opportunities, across their value chains. Many investment firms, real estate companies, and engineering construction organizations are working to access permits, land, and funding for new data centers. Many cloud hyperscalers, telecommunication companies, and tech infrastructure providers are working to manage increased computing demands. Limited electricity supply in US locations such as Northern Virginia, Columbus, and Pittsburgh can delay new project approvals.43 In the United Kingdom and Europe, insufficient power has hindered the construction of new data centers.44 Additionally, supply chain bottlenecks can cause delays and higher costs due to shortages of critical components and materials. For example, many operators are facing a delay in securing crucial equipment like generators, uninterruptible power supply batteries, transformers, servers, and building materials, forcing them to use available alternatives.45 Along with these challenges, enterprises and data center operators have to manage these increased loads.
Limited application flexibility caused by vendor lock-in. Gen AI models and infrastructure are advancing faster than organizations can keep pace, introducing a risk that leaders could be paying for obsolete or duplicative capabilities or partnering with vendors that don’t ensure their products interoperate easily with other future technologies.
In addition, many organizations are rushing to secure advanced hardware for gen AI, but suppliers are struggling to keep up with the demand. NVIDIA’s Blackwell graphic processing units (GPUs) are sold out for the next year, prompting companies to seek alternatives.46 Organizations that rely on a single vendor risk missing key hardware advancements in this competitive market.47
Value realization concerns. The initial investment required for training and running large models along with the necessary computing hardware is substantial. According to Deloitte’s fourth quarter State of Generative AI in the Enterprise survey, about one-third of respondents believe that not achieving the expected value could slow the overall marketplace adoption of generative AI over the next two years.
Based on the extent the organization is impacted by supply-side or demand-side risks (or both), leaders are likely exploring several solutions, including:
The challenge with this approach is that companies have invested in these GPUs and still need scalable energy access. For example, an organization focusing on gen AI-based language translation and dubbing services initially purchased a four-GPU cluster, estimating a 60% to 70% cost-savings relative to the cloud. However, they hadn’t accounted for the power and cooling requirements that would come from running six machines. Ultimately the company opted for a hybrid approach with simple workloads performed on premises and more resource-intensive computing in the cloud.55
Resource efficient hardware, along with efficient workload balancing, will play a crucial role in managing power consumption and efficiently utilizing available hardware. NVIDIA introduced its Blackwell GPU platform whose processors are designed to handle the demanding workloads of LLM inference while significantly reducing energy consumption.56 IBM has also revealed its NorthPole chip was 25 times more energy-efficient than the other commonly used 12 nm GPUs and 14 nm CPUs.57
On-premises alternatives—including edge AI infrastructure solutions—are also gaining prominence, and new leading practices in cooling, construction, and collaboration are emerging. Cooling requirements can constitute up to 40% of energy consumption in data centers.58 As workloads increase and power demands generate more heat, liquid cooling systems—which circulate coolant directly to heat-producing components—have been used to manage data center power demand more efficiently.59
Given the challenge goes beyond data centers to a larger energy load challenge, there’s also a need for alternative energy sources to fill the gap. While modernization of the energy grid and smart meters are certainly options, the innovation needed to support the scale of energy goes well beyond what a modernized grid can handle. As a result, alternative energy solutions are also being explored which can complement the existing infrastructure. For example, Google’s upcoming data center in Mesa, Arizona, expected to be finished by 2025, will use over 400 megawatts of clean energy from three sources: solar photovoltaic panels, wind turbines, and battery storage systems.62 Similarly, a German company colocation provider has placed data centers inside wind turbines for clean energy access.63
Like alternative energy, nuclear energy and micro-nuclear strategies are on the rise. There have been reports of increases in micronuclear reactors—compact energy infrastructure available by road, rail, or sea—to meet the colocated demand.64 Amazon acquired a data center site next to Pennsylvania’s Susquehanna nuclear power plant.65 Microsoft agreed to recommission and purchase all of the electricity generated by Three Mile Island’s nuclear power plant for the next 20 years to support its data centers.66 Similarly, the future viability of these energy approaches will be impacted by national energy policy and infrastructure decisions.
Organizations reliant on gen AI as a core business opportunity might need to invest more considerably to transform their infrastructure. For example, Meta has built its own colocated AI factory to support its large gen AI workloads while managing power or cooling needs through unified system design.71 The upfront cost of its colocation strategy is part of a larger investment strategy for the business.72
Gen AI introduces internal and external risks that cut across the four categories discussed in this article—to the enterprise, to gen AI capabilities, from adversarial AI, and from the marketplace. The risks are multidimensional and intertwined in a way that organizations, regardless of industry and strategy, should assess the best way to protect their gen AI integrations. Leaders will likely need to play a crucial role in aligning their cyber and business resilience strategies based on this assessment. Risk leadership, including the CISO, is well positioned to help leaders understand the organizations’ exposure to gen AI risks, as well as how to address them with known and new approaches to protect the organization on all fronts.
A single solution can’t address all the risks, and organizations will likely have to align and customize multiple solutions based on their exposure. CISOs should also confirm that they are following a human in the loop along with a secure-by-design approach across their software development lifecycle to ensure that their organization is safeguarding customer privacy. They should build on their established practices to accommodate the evolved nature of existing risks as well as the nuanced risks of gen AI models and the rapidly evolving infrastructure needs to enable business resilience strategies accordingly.
The insights on gen AI risks and emerging solutions detailed in this report are based on thematic analysis of 10 interviews with Deloitte’s gen AI, risk, and security leaders conducted from July to November 2024 for this research. The research team has also conducted an in-depth literature review and original analysis of survey data from Deloitte’s fourth edition of the Global Future of Cyber survey, collected June 2024.
Part 1 of this series examines how leaders can maintain the quality of digital products when integrating gen AI into the software development life cycle.
Part 2 of the series identifies four engineering obstacles that organizations could address to help enhance data and model quality and fully unlock gen AI’s potential.