Few precedents are available to C-suite executives when making decisions concerning generative AI. While leaders across industries are exploring different approaches, the lack of a road map presents a serious challenge when assessing which gen AI decisions are most likely to maximize benefits while mitigating potential risks.
This research aims to reduce that uncertainty. We analyzed more than 100 actions that respondents to Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey said they were currently taking as part of their gen AI implementations, along with those actions’ connections to various business outcomes. Our analysis (see methodology) found that respondents taking a high number of trust-related actions when implementing gen AI processes (among the data, workforce, and customer approaches asked about in the survey) were more likely to report that nearly two-thirds (66%) or more of their organization’s expected gen AI benefits had been achieved (figure 1). Respondents in this group might, for example, be watermarking synthetic data to make clear to internal and external users that it was AI-generated.
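To make the watermarking example concrete, here is a minimal sketch of how an organization might tag synthetic records with provenance metadata so any downstream consumer can see they were AI-generated. The field names and model identifier are illustrative assumptions, not details from the survey:

```python
import json
from datetime import datetime, timezone

def tag_synthetic_record(record: dict, model_name: str) -> dict:
    """Wrap a synthetic record with provenance metadata so internal
    and external users can see it was AI-generated."""
    return {
        "data": record,
        "provenance": {
            "synthetic": True,              # explicit AI-generated flag
            "generator": model_name,        # which model produced it (hypothetical name)
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example: tagging one generated customer record
tagged = tag_synthetic_record({"name": "Jane Doe", "segment": "SMB"},
                              model_name="internal-gen-model-v1")
print(json.dumps(tagged, indent=2))
```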
In addition to higher benefit levels, this group was also more likely to report managing risk at above-average levels. Separately, respondents taking many of the nine potential risk management actions asked about (such as inventorying implementations or having a gen AI review board, among others1) naturally showed improved risk outcomes; however, their overall benefit achievement levels were lower (figure 2). For example, an organization in this group might review gen AI vendor policies and see a positive impact on risk outcomes but a negative impact on achieving the expected benefits of its gen AI program.
Given these nuances, leaders should understand how trust and risk actions differ, where each corresponds with different outcomes, and how to act on the specific feedback loops our analysis uncovered. After all, implementing gen AI within an organization requires understanding the much larger, complex system of technology, processes, and people in which the AI system operates. Gen AI use cases are like cars: They need data as fuel, information pathways as roads, and governance as traffic lights, with interdependencies across all three. The driver needs to know how to operate the system, pedestrians need to know how to function within it, and so do the other cars on the road (whether self-driving or human-operated). Trust shapes risk management at each of these layers, as well as the actions the organization takes to build deeper relationships, transparency, and confidence in the AI use case across stakeholders.
While trust and risk management might seem the same, they’re not. Risk management is a subset of trust-building activities; trust is the outcome of a broader set of actions. For example: Cars go through periodic inspections to make sure all components are working correctly. That is risk management, meant to prevent an unexpected breakdown on the road. The auto mechanic who performs those inspections on time and reliably, is transparent about what needs to be done and how much it will cost, and works to minimize disruption to the driver is building trust. Similarly, for a company integrating gen AI, auditing the data for consent and bias helps the organization manage risk. But being transparent with stakeholders about how the models were built, trained, and monitored for bias, and giving people choice about when and how AI is used, builds confidence and trust. Deloitte research has shown that trust is the foundational relationship between an entity and its stakeholders, based on actions that demonstrate high competence and positive intent through demonstrated capability, reliability, transparency, and humanity.2 Deloitte’s State of Generative AI in the Enterprise survey asked leaders investing in gen AI how they were acting across these four areas to build trust by design into their technology, data, model, and talent strategies.
High-trust respondents (“trust builders”) scored in the top third on their focus on data, governance, and security capabilities; on reducing algorithmic hallucinations for greater reliability; on driving employee transparency; and on showing empathy and kindness throughout tool adoption for greater humanity. To continue the car metaphor, these respondents are not just following the rules of the road: They’re actively working to improve trust in the fuel, the drivers, the roads, and the system overall.
As a result, trust builders are 18 percentage points more likely to be in the top third of organizations achieving their expected benefits: improving existing products and services, encouraging innovation and growth, improving efficiency and productivity, reducing costs, and increasing the speed and ease of new development. They are also 18 percentage points more likely to uncover new ideas and insights, increase revenue, enhance client and customer relationships, detect fraud and risk, and shift workers from lower- to higher-level tasks. What’s more, unlike other organizations, trust builders reported only positive results across all of the outcomes analyzed.
The risk management actions asked about in the survey, by contrast, focused on overarching process controls and on changes to leadership and organizational structures around gen AI activities. “Risk ready” respondents scored among the top third of organizations on actions like inventorying gen AI implementations, training practitioners on potential risks, and formalizing risk advisory groups. These foundational actions matter: They help ensure the organization is buying the right car and that it’s roadworthy.
In an environment in which gen AI technologies and strategies are still maturing, there is considerable uncertainty and risk, making risk management actions essential. According to Deloitte’s 2024 State of Generative AI in the Enterprise reports to date, addressing gen AI risk is a top concern among respondents, and the majority of respondents across industries don’t believe their organizations are prepared. Risk management actions are therefore an important tool to optimize in the gen AI implementation toolbox. And while the list of actions analyzed is by no means comprehensive, the respondents who fall into the risk group present a stark contrast with those in the trust group. Trust and risk actions are not strongly correlated, though the correlation (0.204, where 1 would indicate perfect correlation) is statistically significant. In other words, the overlap in respondents is moderate: The two groups are related but not the same, and the analysis examines them as separate but related concepts.3
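For readers curious about the statistic above, here is a minimal sketch of how such a correlation might be computed. The data is simulated for illustration only; the survey’s underlying responses are not reproduced here, so the output will not match the reported 0.204:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Simulated per-respondent counts of trust actions and risk actions
# (illustrative only; not the actual survey data).
trust_actions = rng.integers(0, 12, size=500)               # 0-11 trust actions taken
noise = rng.normal(0, 3, size=500)
risk_actions = np.clip(0.2 * trust_actions + 4 + noise, 0, 9).round()

r, p_value = pearsonr(trust_actions, risk_actions)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
# A modest positive r (like the reported 0.204) indicates related but
# distinct behaviors: Overlap exists, yet one does not imply the other.
```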
Those that prioritize risk management actions often do a better job of mitigating risk, but risk-ready respondents were 16 percentage points less likely than average to be in the top third of organizations achieving higher gen AI benefit levels, and 34 percentage points less likely than trust builders to be achieving those benefits. Not only do trust actions correlate with greater benefits, but trust builders were also 15 percentage points more likely to report managing gen AI risk well, making trust-building a powerful tool for managing value (benefits and risks) from gen AI programs.
That doesn’t mean leaders shouldn’t take actions to manage risk. But organizations will likely need to go beyond risk management and actively work to build trust. There are potential trade-offs between a narrow focus on risk management standards, models, and measures to reduce risk and a more holistic trust-building strategy that manages both risk and reward through effective governance and policy guardrails. A trust-first view incorporates risk management strategies but goes beyond them to offer a full view of AI applications, systems, and models, along with associated risks across technical, regulatory, ethical, and reputational topics. Risk management alone may not be sufficient to earn the trust that workers and customers need in order to adopt AI.
Organizations that have built a high degree of trust in their AI programs and processes might be able to move more quickly in implementing gen AI solutions. Employees who trust their employer may be more willing to speak up and course-correct when risks are identified. In contrast, if an organization is solely focused on risk and lacks overall trust, the pace of progress and innovation may be slower.
Here are two hypothetical examples to illustrate the difference between the two approaches:
Company A is focused only on risk management. They’ve developed a steering committee to review all gen AI project proposals across the organization, weighing potential risks and how to measure and manage them. The committee has approved several gen AI proofs of concept, is reviewing others, and has rejected even more. The committee has been a strong mechanism for centrally managing gen AI risks, but it has not focused more broadly on building trust in the data, the models, and the system overall.
Company B has taken a trust-first approach that includes training, communication, transparency, and risk management. They’ve implemented a new training program that educates anyone in the organization who engages with gen AI on what data the AI has been trained on and its potential gaps in knowledge. The company has implemented new guardrails, including tiered access levels that distinguish basic users from more advanced users of its models. The technology team has also implemented risk management, data, and cybersecurity protocols, including an AI firewall to better monitor information flowing into and out of the model. These actions were implemented to improve transparency and data integrity and to reduce the potential for reputational risk.
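To make Company B’s guardrails concrete, here is a minimal sketch of a tiered access check paired with a simple input screen. Company B is hypothetical, and the tier names, model list, blocked terms, and filter logic are all illustrative assumptions; a production “AI firewall” would inspect model outputs as well and log every decision:

```python
from dataclasses import dataclass

# Hypothetical tier-to-model mapping: advanced users see more models.
TIER_MODELS = {
    "basic": ["general-assistant"],
    "advanced": ["general-assistant", "code-gen", "fine-tuned-finance"],
}

# Crude screen for obviously sensitive fields in prompts (illustrative).
BLOCKED_TERMS = ["ssn", "account_number"]

@dataclass
class User:
    name: str
    tier: str  # "basic" or "advanced"

def authorize(user: User, model: str) -> bool:
    """Gate model access by the user's tier."""
    return model in TIER_MODELS.get(user.tier, [])

def firewall(prompt: str) -> bool:
    """Screen inbound prompts for sensitive terms before they reach the model."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

user = User("analyst01", tier="basic")
prompt = "Summarize Q2 churn drivers"
if authorize(user, "general-assistant") and firewall(prompt):
    print("Request allowed")   # would be forwarded to the model here
else:
    print("Request blocked and logged")
```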
Both approaches have their merits, and one can complement the other, but a trust-first approach can include risk management, and much more, with the goal of impacting a broader set of outcomes.
Thus, leaders can work to better calibrate their actions if they consider taking the following steps:
Operationalizing trust in AI should include a thoughtful strategy, governance process, and technology enablement to manage the program efficiently and transparently. The strategy should cover three components: establishing AI principles and a management framework that drives AI adoption in line with values and regulations; thoughtful governance processes that cultivate an AI program with informed, responsible stakeholders; and strong technology enablement that can simplify and optimize governance processes.
Leaders who improve data integrity, reduce gen AI algorithmic hallucinations, and create a transparent and empathetic culture are more likely to manage gen AI risks and achieve the greatest rewards, according to this analysis. While this may sound simple, it’s incredibly difficult and may require new data architectures and queries that reflect real-world scenarios, established processes for trusted data exchange across applications and with human oversight, improved transparency into a model’s known universe and parameters, and other emerging solutions detailed in Deloitte’s research on the future of engineering and generative AI.
Leaders can take actions across people, process, and technology to build trust; for example, through AI ethics boards, frameworks and regulatory compliance, and AI inventory and monitoring tools (figure 3).
Leaders are often navigating gen AI implementations without a road map. Trust across the enterprise and with stakeholders will likely be foundational for improved benefits and risk management. On that foundation, organizations can build quality applications and reliable infrastructure for the future.
Deloitte’s Machine Intelligence and Data Science team used a multivariate logistic regression model to analyze probabilistic relationships across different variables and outcomes in Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey.
The model took the 1,982 responses from the survey, fielded from February to March 2024, and scored each response with a representative number for each question based on the survey’s five-point Likert scale. The model then estimated the likelihood, expressed as a percentage, that a variable was associated with a given outcome. The team controlled for certain influential firmographic factors, tuning the model to provide reliable insight into gen AI adoption behaviors and outcomes.
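For readers who want to see the shape of such an analysis, here is a minimal sketch of a logistic regression on Likert-scored predictors with firmographic controls. The column names, coefficients, and data are illustrative assumptions on simulated data; this is not Deloitte’s actual model or pipeline:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1982  # matches the survey's response count

df = pd.DataFrame({
    "trust_score": rng.integers(1, 6, n),   # 1-5 Likert scoring (hypothetical)
    "risk_score": rng.integers(1, 6, n),
    "firm_size": rng.integers(1, 4, n),     # firmographic control (hypothetical)
    "industry": rng.integers(1, 8, n),      # firmographic control (hypothetical)
})

# Simulated binary outcome: top-third benefit achievement (yes/no).
logit = 0.6 * df["trust_score"] - 0.2 * df["risk_score"] - 2
df["high_benefit"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the regression with an intercept; controls enter as covariates.
X = sm.add_constant(df[["trust_score", "risk_score", "firm_size", "industry"]])
model = sm.Logit(df["high_benefit"], X).fit(disp=0)
print(model.summary())
```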
The Deloitte Center for Integrated Research supported the analysis of these probabilistic relationships, developing insights that can give business and technology leaders a deeper understanding of the forces influencing desirable gen AI outcomes (controlling for what they can’t change) and of how to stack value drivers to achieve multiplier effects. The team used informal path analysis to map critical paths for business leaders.