Generative AI trust actions correlate with better risk management and bigger rewards

Deloitte research shows that leaders who build trust in artificial intelligence are more likely to report higher benefit levels and successfully balance integration and risk

C-suite executives have few precedents to draw on when making decisions concerning generative AI. While leaders across industries are exploring different approaches, the lack of a road map presents a serious challenge when assessing which gen AI decisions are most likely to maximize benefits while mitigating potential risks.

This research aims to reduce that uncertainty. Using Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey, we analyzed over 100 actions respondents said they were currently taking in their gen AI implementations, along with those actions’ connections to various business outcomes. Our analysis (see methodology) found that respondents taking a high number of trust-related actions when implementing gen AI processes, among the data, workforce, and customer approaches asked about in the survey, were more likely to report that nearly two-thirds (66%) or more of their organization’s expected gen AI benefits had been achieved (figure 1). Respondents in this group might, for example, be implementing watermarks on synthetic data to clarify to internal and external users that it was AI-generated.

In addition to higher benefit levels, this group was also more likely to manage risk at above-average levels relative to other respondents. Separately, respondents taking a high number of the nine potential risk management actions (such as inventorying implementations or having a gen AI review board, among others1) naturally showed improved risk outcomes; however, their overall benefit achievement levels were lower than average (figure 2). For example, an organization in this group might review gen AI vendor policies and see a positive impact on risk outcomes but a negative impact on achieving expected program benefits from gen AI.

Given these nuances, leaders should understand how trust and risk actions differ, where they correspond with different outcomes, and how they can act to influence the specific feedback loops our analysis uncovered. After all, implementing gen AI within an organization requires understanding the much larger, complex system of technology, processes, and people in which the AI system operates. Use cases, like cars, require data as fuel, roads as information highways, and traffic lights as governance, with interdependencies across all. The driver needs to know how to operate the system, pedestrians need to know how to function within it, and so do other cars on the road (whether self-driving or operated by a human). Trust affects risk management at each of these layers, as well as the actions the organization takes to build deeper relationships, transparency, and confidence in the AI use case across stakeholders.

Trust-building actions bring all upside to gen AI strategies

While trust and risk management might seem the same, they’re not. Risk management is a subset of trust-building activities; trust is the outcome of a broader set of actions. For example: Cars go through periodic inspections to make sure all components are working correctly. That is risk management, meant to prevent an unexpected breakdown on the road. The auto mechanic who works with the customer to perform inspections on time, reliably, with transparency about what needs to be done and how much it will cost, and with humanity to minimize disruption to the driver is building trust. Similarly, for a company integrating gen AI, auditing the data for consent and bias helps the organization manage risk. But providing transparency to stakeholders about how the models were built, trained, and monitored for bias, as well as giving people choice about when and how AI is used, builds confidence and trust. Deloitte research has shown trust is the foundational relationship between an entity and its stakeholders, based on actions that demonstrate high competence and positive intent through capability, reliability, transparency, and humanity.2 Deloitte’s State of Generative AI in the Enterprise survey asked leaders investing in gen AI how they were acting across these four areas to build trust by design into their technology, data, model, and talent strategies.

High-trust respondents (“trust builders”) scored in the top third of respondents in their focus on data, governance, and security capabilities; on reducing algorithmic hallucinations for greater reliability; on driving employee transparency; and on showing empathy and kindness across tool adoption for greater humanity. To continue with the car metaphor, these respondents are not just following the rules of the road: They’re actively working to improve trust in the fuel, the drivers, the roads, and the system overall.

As a result, trust builders are 18 percentage points more likely to be in the top third of organizations achieving their expected benefits of improving existing products and services, encouraging innovation and growth, improving efficiency and productivity, reducing costs, and increasing the speed and ease of new development. They are also 18 percentage points more likely to uncover new ideas and insights, increase revenue, enhance client and customer relationships, detect fraud and risk, and shift workers from lower- to higher-level tasks. What’s more, unlike other organizations, trust builders report only positive results across all outcomes analyzed.

Risk management actions, in general, come with a downside for gen AI programs

By contrast, the risk management actions asked about in the survey focused on overarching process controls and on changes to leadership and organizational structures around gen AI activities. “Risk-ready” respondents scored among the top third of organizations on actions like inventorying gen AI implementations, training practitioners on potential risks, and formalizing risk advisory groups. In the car metaphor, these foundational actions help make sure the organization is buying the right car and that it’s roadworthy.

In an environment in which gen AI technologies and strategies are still maturing, there is considerable uncertainty and risk, making risk management actions essential. According to Deloitte’s 2024 State of Generative AI in the Enterprise reports to date, addressing gen AI risk is a top concern among respondents, and most respondents across industries don’t believe their organizations are prepared. Risk management actions are therefore an important strategy to optimize in the gen AI implementation toolbox. And while the list of actions analyzed is by no means comprehensive, the respondents who fall into the risk group stand in stark contrast to those in the trust group. Trust and risk are not strongly correlated, though there is a statistically significant partial correlation of 0.204. In other words, the two groups of respondents overlap moderately, but they are not the same, and the analysis examines them as separate but related concepts.3
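
The notion of partial correlation can be made concrete with a short sketch: residualize both scores on the control variables, then correlate the residuals. The code below uses synthetic data and hypothetical variable names; it illustrates the statistical technique, not the survey team’s actual pipeline.

```python
import numpy as np

def partial_correlation(x, y, controls):
    """Correlation between x and y after removing the linear
    influence of the control variables from each."""
    # Design matrix: intercept plus controls
    Z = np.column_stack([np.ones(len(x)), controls])
    # Residualize x and y on the controls via least squares
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Illustrative synthetic data: trust and risk scores that share
# a common driver (a hypothetical firmographic factor), plus noise
rng = np.random.default_rng(0)
n = 1982                    # matches the survey's response count
size = rng.normal(size=n)   # hypothetical control, e.g., firm size
trust = 0.4 * size + rng.normal(size=n)
risk = 0.4 * size + 0.2 * trust + rng.normal(size=n)

# Prints a modest partial correlation, on the order of 0.2
print(partial_correlation(trust, risk, size[:, None]))
```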

Those that prioritize risk management actions often do a better job of mitigating risk, but risk-ready respondents were 16 percentage points less likely than average to be in the top third of organizations achieving higher gen AI benefit levels, and 34 percentage points less likely than trust builders to be achieving those benefits. Not only do trust actions correlate with greater benefits, but trust builders were also 15 percentage points more likely to be managing gen AI risks at above-average levels, making trust-building a powerful tool for managing value (benefits and risks) from gen AI programs.

What does this mean for leaders?

None of this means that leaders shouldn’t take actions to manage risk. But organizations will likely need to go beyond risk management and actively work to build trust. There are potential trade-offs between a narrow focus on risk management standards, models, and measures to reduce risk and a more holistic trust-building strategy that manages both risk and reward through effective governance and policy guardrails. A trust-first view incorporates risk management strategies but goes beyond them to offer a full view of AI applications, systems, and models, as well as the associated technical, regulatory, ethical, and reputational risks. Risk management alone may not be sufficient: If workers and customers don’t trust the system, they are unlikely to adopt it.

Organizations that have built a high degree of trust in their AI programs and processes might be able to move more quickly in implementing gen AI solutions. Employees who trust their employer may be more willing to speak up and course-correct when risks are identified. In contrast, if an organization is solely focused on risk and lacks overall trust, the pace of progress and innovation may be slower.

Here are two hypothetical examples to illustrate the difference between the two approaches:

Company A is focused only on risk management. It has established a steering committee to review all gen AI project proposals across the organization, weighing potential risks and how to measure and manage them. The committee has approved several gen AI proofs of concept, is reviewing others, and has rejected even more. This group has been a strong mechanism for centrally managing gen AI risks, but it has not focused more broadly on building trust in data, models, and the system overall.

Company B has taken a trust-first approach that includes training, communication, transparency, and risk management. It has implemented a new training program that educates anyone in the organization who engages with gen AI on what data the AI has been trained on and its potential gaps in knowledge. The company has implemented new guardrails, including tiered access to models for basic versus more advanced users. The technology team has also implemented risk management, data, and cybersecurity protocols, including an AI firewall to better monitor information going into or coming out of the model. These actions were implemented to improve transparency and data integrity and to reduce the potential for reputational risk.
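
Company B’s tiered access guardrail can be illustrated with a brief sketch. Everything here (the tier names, model identifiers, and policy itself) is a hypothetical example of the pattern, not a description of any real system:

```python
from dataclasses import dataclass

# Illustrative policy: which model capabilities each user tier may call.
# Tier names and model identifiers are hypothetical examples.
ACCESS_POLICY = {
    "basic": {"chat-small"},
    "advanced": {"chat-small", "chat-large", "code-assistant"},
}

@dataclass
class User:
    name: str
    tier: str  # "basic" or "advanced"

def authorize(user: User, model: str) -> bool:
    """Return True if the user's tier permits calling the requested model."""
    return model in ACCESS_POLICY.get(user.tier, set())

def route_request(user: User, model: str, prompt: str) -> str:
    if not authorize(user, model):
        # Refuse explicitly rather than silently downgrading, for transparency
        return f"Access denied: tier '{user.tier}' cannot use '{model}'."
    return f"Routing prompt to {model}..."  # placeholder for a real model call

print(route_request(User("analyst", "basic"), "chat-large", "Summarize Q2."))
print(route_request(User("engineer", "advanced"), "chat-large", "Summarize Q2."))
```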

Both approaches have their merits, and one can complement the other, but a trust-first approach can encompass risk management and much more, with the goal of influencing a broader set of outcomes.

Thus, leaders can better calibrate their actions by considering the following steps:

  • Communicating gen AI strategy to increase transparency and trust in AI: Our analysis shows that having a clearly articulated, transparent gen AI strategy is the strongest predictor that a respondent will believe their organization’s trust in AI in all forms has increased since gen AI was introduced in late 2022.4 Similarly, Deloitte UK’s recent report on European perspectives of gen AI found that transparency positively correlates with increased employee excitement about gen AI opportunities, a stronger desire to upskill, and greater confidence in gen AI’s ability to help employees remain relevant in their careers. Indeed, foundational trust in AI should be a starting point and an amplifier of reported improvements in business process outcomes and productivity.

Operationalizing trust in AI should include a thoughtful strategy, governance process, and technology enablement to manage the program efficiently and transparently. This approach should cover three components: establishing AI principles and a management framework that drive AI adoption in line with values and regulations; thoughtful governance processes that cultivate an AI program with informed, responsible stakeholders; and strong technology enablement that can simplify and optimize governance processes.

  • Embracing trust-building actions across people, process, and technology: In many cases, building trust in gen AI will look similar to building trust in AI. Leaders can start with Deloitte’s Trustworthy AI framework, which emphasizes governance and regulatory compliance throughout the AI life cycle. The framework is anchored on seven dimensions of trustworthy AI: that it is transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, responsible, and accountable. Importantly, trusted AI is based on human accountability and responsibility for the tool’s behavior, establishing guardrails related to human-in-the-loop gen AI processes at the point of prompting, model generation, and output engagement.

Leaders who improve data integrity, reduce gen AI algorithmic hallucinations, and create a transparent and empathetic culture are more likely to manage gen AI risks and achieve the greatest rewards, according to this analysis. While this may sound simple, it’s incredibly difficult and may require new data architectures and queries that reflect real-world scenarios, established processes for trusted data exchange across applications and with human oversight, improved transparency into a model’s known universe and parameters, and other emerging solutions detailed in Deloitte’s research on the future of engineering and generative AI.

Leaders can take actions across people, process, and technology to build trust; for example, through AI ethics boards, frameworks and regulatory compliance, and AI inventory and monitoring tools (figure 3).

  • Taking calculated risks to reduce friction that could stand in the way of positive outcomes: Given the importance of managing gen AI risks, risk-reduction actions will often be necessary. Leaders should ensure these actions don’t create unnecessary friction, but in the case of gen AI, they may also need to take bold, calculated risks. Our analysis shows that survey respondents who believe their organization needs to act on gen AI ahead of their industry’s pace of adoption are more likely to be among the top third of organizations improving risk outcomes. In market-making instances, a strategic scenario-planning committee may be appropriate to balance evolving business strategies with emerging risks. It’s even possible that the current convergence of factors (tech acceleration, open innovation, global regulation, ambiguous ownership, and hyper-personalization) could create risks so significant to legacy intellectual property models that organizations would have no choice but to act, moving toward more open, data-driven business models and technologies.
  • Factoring in trust and risk together: While trust is foundational and calculated risks may be necessary, trust and risk are ultimately two sides of the same coin. Leaders should create greater trust in AI (and gen AI) across their workforce, applications, and users, aided by select controls5 that target known, manageable risks, as the central foundation for adoption and scale. The trick will be determining where to focus that energy. The four dimensions of trust (capability, reliability, transparency, and humanity) can serve as a guiding force for balancing trust and risk actions while remaining conscious of the relational dynamics.
  • Looking for feedback loops: Given that trust and risk are interrelated and mutually reinforcing, leaders can also look to better understand feedback loops. For example, while risk management actions on their own don’t directly correlate with higher benefit levels, our analysis shows that they correspond with higher preparedness levels, which in turn can correspond with higher benefit levels. Additionally, organizations that have already achieved a baseline of trust in AI are more likely to be taking additional trust-building actions, which corresponds with continued investment in gen AI. These relationships are still emerging, feedback loops across the market and within organizations are in their infancy, and the connections are multiplex. But what is clear at this stage is that a focus on trust is integral to achieving strategic gen AI outcomes.

Acting in an environment of uncertainty

Leaders are often navigating gen AI implementations without a road map. Trust across the enterprise and with stakeholders will likely be foundational for improved benefits and risk management. On that foundation, organizations can build quality applications and reliable infrastructure for the future.

Methodology

Deloitte’s Machine Intelligence and Data Science team used a multivariate logistic regression model to analyze probabilistic relationships between different variables and outcomes in Deloitte’s second quarter 2024 State of Generative AI in the Enterprise survey.

The model took the 1,982 responses from the survey, fielded from February to March of 2024, and scored each response with a representative number for each question based on the survey’s five-point Likert scale. The model then estimated the probability, expressed as a percentage, that a variable is associated with a given outcome. The team controlled for certain influential firmographic factors, tuning the model to provide reliable insight on gen AI adoption behaviors and outcomes.

The Deloitte Center for Integrated Research supported the analysis of these probabilistic relationships, developing insights that can give business and technology leaders a deeper understanding of the forces driving desirable gen AI outcomes (controlling for what they can’t change) and of how stacking value drivers can achieve multiplier effects. The team used informal path analysis to map critical paths for business leaders.
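
For readers who want a concrete picture of the technique, below is a minimal sketch of a multivariate logistic regression on Likert-scored survey data with a firmographic control. The data, variable names, and coefficients are synthetic placeholders assumed for illustration; this is not Deloitte’s actual model or variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1982  # matches the survey's response count

# Synthetic Likert-scale scores (1-5): hypothetical trust and risk action indexes
trust_actions = rng.integers(1, 6, size=n)
risk_actions = rng.integers(1, 6, size=n)

# Hypothetical firmographic control (e.g., a company size category, 1-4)
firm_size = rng.integers(1, 5, size=n)

# Synthetic binary outcome: top third on benefit achievement (illustrative link)
logit = 0.8 * (trust_actions - 3) - 0.3 * (risk_actions - 3) + 0.2 * (firm_size - 2)
achieved_high_benefits = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a multivariate logistic regression with the control included
X = np.column_stack([trust_actions, risk_actions, firm_size])
model = LogisticRegression().fit(X, achieved_high_benefits)

# Each coefficient expresses that variable's association with the outcome,
# holding the others (including the firmographic control) constant
for name, coef in zip(["trust_actions", "risk_actions", "firm_size"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```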


BY

Diana Kearns-Manolatos

United States

David Levin

United States

Michael Bondar

United States

Beena Ammanath

United States

Endnotes

  1. The full list of generative AI risk management actions asked about in the survey includes: keeping a formal inventory of all generative AI implementations; training practitioners who build and use generative AI systems how to recognize and mitigate potential risks; using a formal group or board to advise on generative AI–related risks; having a single executive responsible for managing generative AI–related risks; conducting internal audits and testing on generative AI tools and applications; using outside vendors to conduct independent audits and testing on generative AI tools and applications; establishing a governance framework for the use of generative AI tools and applications; ensuring a human validates all generative AI–created content; and monitoring regulatory requirements and ensuring compliance.

  2. Michael Bondar, Natasha Buckley, and Roxana Corduneanu, “Can you measure trust within your organization?” Deloitte Insights, Feb. 9, 2022.

  3. The partial correlation in a regression model expresses the unique contribution of each independent variable, accounting for the other independent variable (in this case risk and trust). The regression coefficients in the model represent the unique contributions of each.

  4. Respondents were asked: “How has your organization’s trust in all forms of AI changed since generative AI emerged in late 2022?”

  5. Ana Cristina Costa and Katinka Bijlsma-Frankema, “Trust and control interrelations: New perspectives on the trust–control nexus,” Group & Organization Management 32, no. 4 (2007): 392–406.


Acknowledgments

The authors would like to thank all of the many leaders who contributed to the Deloitte State of Generative AI in the Enterprise research program, without which this analysis would not have been possible.

Additionally, they would like to thank the Deloitte colleagues who provided invaluable insight, analysis support, and methodological expertise that was instrumental in shaping this research and the insights in the article (listed alphabetically): Ahmed Alibage, PhD, Brenna Sniderman, Kate Graeff, Natasha Buckley, Roxana Corduneanu, Sameen Salam, and Paula Payton, PhD.

The authors also want to thank Chad Testa, Costi Perricos, Jim Rowan, Justin Joyner, Kate Fusillo Schmidt, and Lena La for their perspective and support.

Finally, they would like to extend their gratitude to the Deloitte Insights editor, Corrie Commisso, designer, Molly Piersol, and production editor, Preetha Devan for their partnership.

Cover image by: Natalie Pfaff; Adobe Stock