Europeans are optimistic about generative AI, but there is more to do to close the trust gap

Consumers and employees across Europe clearly acknowledge the benefits of generative AI, although their levels of trust vary depending on the scenario and who is using it


Trust as a key to acceptance

The European generative AI market is growing rapidly, offering immense opportunities, but companies must overcome significant challenges to ensure people feel comfortable with the technology. Trust, a cornerstone of widespread acceptance, is particularly crucial. As innovation surges, the future of generative AI will depend on closing the trust gap between organisations, consumers, and the employees who rely on these tools.

Generative AI is changing the technological landscape, both in Europe and globally. The European generative AI (gen AI) market is expanding rapidly, with investment approaching US$47.6 billion in 2024¹ and a surge in startups, particularly in France, Germany, the Netherlands, and the United Kingdom.² In Deloitte’s State of Generative AI in the Enterprise quarter three (Q3) report,³ 65% of European business leaders confirmed that they are increasing their investments in gen AI due to the substantial value realised so far.

However, gen AI’s success will not be defined solely by which company invests the most or develops the best algorithms. Instead, it will depend on how effectively employees use these tools and how confident consumers are in gen AI’s benefits. Deloitte defines trust as demonstrating a high degree of competence and the right intent.⁴ Both of these factors are critical for the successful adoption of any innovative technology, especially gen AI. Is the technology reliable, and does it have the interests of its stakeholders in mind? The pace and scale of its adoption will largely be determined by the trust employees and consumers have in its capabilities.

Deloitte recently surveyed over 30,000 consumers and employees across 11 European countries to assess their trust in gen AI and readiness to adopt these tools. The findings reveal both optimism and notable concerns about the potential risks, signalling a critical trust gap that businesses must address. To ensure gen AI’s long-term success, organisations must prioritise responsible implementation and build trust among employees and consumers.

Understanding European consumers’ and employees’ trust in generative AI: Deloitte’s research methodology

From June 28 to August 12, 2024, Deloitte surveyed 30,252 consumers and employees in Belgium, France, Germany, Ireland, Italy, Poland, Spain, Sweden, Switzerland, the Netherlands, and the United Kingdom. Of these respondents, 44% had used gen AI, 22% had not used it but were aware of it, and 34% were either unaware or unsure of any gen AI tools. Respondents unaware of any gen AI tools were excluded from the analysis.

 

All respondents (consumers and employees) familiar with gen AI answered questions about the general use and potential impact of these tools. Employees responded to an additional subset of questions, specifically addressing their use of gen AI in the workplace.

 

The samples are nationally representative within each of the 11 countries, and the reported data are weighted by age, working status (interlocked with gender), and region. In addition, education and, for the United Kingdom only, social grade were used for weighting where applicable.
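
For readers who want to see how such weighting works mechanically, the following is a minimal Python sketch of simple cell (post-stratification) weighting. The weighting cells, category labels, and population shares are hypothetical placeholders, not the actual targets used in this survey.

    from collections import Counter

    # Minimal illustration of cell weighting: each respondent falls into one
    # weighting cell (age band x working status/gender x region) and receives
    # a weight equal to the cell's population share divided by its sample share.
    # All categories and shares below are hypothetical placeholders.
    respondents = [
        ("18-34", "working female", "north"),
        ("18-34", "working male", "south"),
        ("35-54", "working female", "north"),
        ("35-54", "non-working male", "south"),
        ("55+", "working male", "north"),
        ("55+", "non-working female", "south"),
    ]

    # Hypothetical population shares per cell (e.g. from census data); sum to 1.0.
    population_share = {
        ("18-34", "working female", "north"): 0.10,
        ("18-34", "working male", "south"): 0.12,
        ("35-54", "working female", "north"): 0.20,
        ("35-54", "non-working male", "south"): 0.18,
        ("55+", "working male", "north"): 0.22,
        ("55+", "non-working female", "south"): 0.18,
    }

    counts = Counter(respondents)
    n = len(respondents)

    # Over-represented cells are scaled down, under-represented cells scaled up.
    weights = {cell: population_share[cell] / (counts[cell] / n)
               for cell in population_share}

    for cell, weight in weights.items():
        print(cell, round(weight, 2))

In practice, weighting schemes use far more granular cells and may apply iterative (raking) adjustments, but the basic idea of scaling groups to their population shares is the same.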

 

To conduct a worker-leader gap analysis, we also leveraged the European cut of Deloitte’s State of Generative AI in the Enterprise Q3 survey data. This survey included data from 705 senior leaders in the United Kingdom, France, Germany, Italy, the Netherlands, and Spain.

 

Deloitte defines generative AI as a branch of artificial intelligence that can generate text, images, video, and other assets in response to a query. These systems, often built using large language models, can interact with humans.


The fundamentals: Our findings around habits and attitudes of Europeans toward generative AI

How and why Europeans are using generative AI

Awareness of gen AI tools among Europeans varies. Despite widespread media coverage, 34% of respondents remain unaware or unsure of any gen AI tools. Among those familiar with gen AI, nearly half (47%) have used it for personal tasks, while less than a quarter (23%) said they have used it for work.

Frequency of generative AI use in Europe

Approximately one-third of gen AI users access these tools at least weekly for personal (30%) and work-related (33%) activities. Personal use primarily focuses on general search and information gathering (47%), followed by idea generation (40%). For professional purposes, idea generation ranks highly (40%), alongside summarising text and general search (both 38%), and content creation or editing (37%). However, what stands out is the role of gen AI in overcoming language barriers, with translation being notably popular for both personal (27%) and work use (30%). This highlights gen AI’s potential in both global communication and information processing.

Widespread optimism among the users of generative AI

The survey reveals widespread optimism among gen AI users. A majority believe that gen AI can help businesses improve products and services (71%), automate routine tasks to improve employee work experiences (66%), and benefit society overall (59%) (figure 1).

In the workplace, 79% of employees using gen AI believe that it will make their jobs easier within the next two years, and 73% expect it to make their roles more enjoyable. In addition, 74% of employees using gen AI want to develop skills to use these tools better, and 69% are excited about the job opportunities gen AI can present. Some 68% believe gen AI will help them stay relevant in their careers. These findings align with Deloitte’s latest State of Generative AI in the Enterprise Q3 survey, in which only 17% of European leaders cite cultural resistance among employees as a significant barrier to gen AI deployment, compared with concerns such as governance (selected by 27% of European respondents) and complying with regulations (selected by 34%).

The areas of concern: Our findings around reservations and mistrust among European users

Despite the optimistic outlook, concerns about responsible use persist among respondents (figure 2). While many users recognise the positive potential of gen AI, only 50% express confidence in their government’s ability to regulate its use effectively, and only 51% trust businesses to use it responsibly. These reservations are amplified by widespread concerns about deepfakes (65%), the spread of misinformation and fake news (63%), and the misuse of personal data (62%).

When it comes to building trust, data confidentiality and security rank as top priorities for 66% of gen AI users. They are considered even more important than a proven track record of accuracy (59%) or understanding the AI’s decision-making process (57%).

Notably, 53% of gen AI users in Europe believe adoption would increase if their governments properly regulated the technology. It remains to be seen how regulations such as the EU AI Act (which came into effect on August 1, 2024) will impact consumer trust. However, business leaders agree that regulation is becoming a top priority. Specifically, nearly half of the European leaders in the Q3 State of Generative AI in the Enterprise survey indicated that monitoring regulatory requirements and ensuring compliance are critical strategies for managing gen AI–related risks.

Gen AI nonusers are more hesitant

Our analysis compared respondents who have used gen AI with those who are aware of it but have not used it. Nonusers tend to be more cautious, sceptical, and concerned about potential risks. For instance, 74% of nonusers are concerned about the misuse of personal data, compared to only 62% of users. Similarly, while 57% of users believe that gen AI produces reliable results, this drops to only 33% among nonusers. This scepticism may influence nonusers’ adoption patterns. At the same time, it is also possible that users’ perceptions are influenced by confirmation bias. For example, Deloitte’s Digital Consumer Trends 2024 report⁵ showed that 36% of gen AI users (versus 25% of nonusers) mistakenly believe that gen AI is always factually accurate.


The trust gap in organisations’ use of generative AI

Respondents in our study trust the results produced by gen AI in some hypothetical scenarios more than in others (figure 3). Specifically, European consumers tend to trust gen AI’s results more when using it themselves, particularly for lower-risk use cases. However, this trust diminishes when organisations use gen AI for scenarios that respondents may perceive as higher risk, a pattern that is consistent across different sectors.

In the media sector, for instance, 70% of European users trust gen AI to produce summaries of news articles. Still, only 50% of users trust it when journalists use it to write news articles. Similarly, in the public sector, 64% of users would trust gen AI to provide personalised assistance regarding matters like tax returns or benefits claims. Yet only 50% trust government departments to use gen AI in determining their eligibility for social welfare programs.

The survey responses suggest a relatively clear inverse relationship between the criticality of the scenario and trust. For instance, summarising existing news content may be perceived as less risky than generating original content, and using gen AI to provide public service information seems safer than applying it to assess eligibility for social welfare programs, which is indeed considered high risk under the EU AI Act.⁶ In addition, users appear more confident in their ability to obtain accurate results when using the technology themselves, and tend to be more wary of organisations using gen AI.

Interestingly, this trust gap is most evident in complex, high-stakes decisions (as above) and less pronounced in more transactional use cases such as insurance and banking. For example, we found that 58% of users would trust gen AI to provide insurance recommendations, similar to the 54% who trust insurance companies using it for policy pricing. Likewise, 54% would trust gen AI for financial product recommendations, while 50% trust banks using it for creditworthiness assessments. While insurance pricing and creditworthiness assessments are both classified as high risk under the EU AI Act,⁷ and while users still place more trust in gen AI results when they control its use, it appears that the trust gap narrows in these transactional applications.

“Catch me if you can”: Our findings around compliance among European workers

The rapid adoption of gen AI is creating a governance gap in many organisations. While 63% of employees using gen AI at work report that their company either encourages (44%) or at least allows (19%) the use of gen AI for work purposes, nearly a quarter (23%) claim their organisation does not have a policy on gen AI.

In addition, over half of gen AI users (56%) believe employees in their country use gen AI without explicit employer approval, with 12% thinking that this happens “a great deal.” When asked why, half of respondents said that they believe it’s because gen AI enhances productivity and job performance. Worryingly for organisations, 38% claim employees do not see any risks in using unapproved tools, and 34% believe that organisations cannot monitor this use (figure 4).

Still, 30% of European business leaders in Deloitte’s State of Generative AI in the Enterprise Q3 survey cite risk management as a critical barrier to successful gen AI deployment. This highlights the need for organisations to address employee perceptions, raise awareness of the risks, and strengthen governance frameworks surrounding these tools. Notably, a “lack of clarity/awareness of company policy” was cited by over a quarter of gen AI users as a reason why they believed employees may use unauthorised tools. Furthermore, only half of respondents using gen AI for work believe that their employers are transparent about the impact of these tools.

The survey also revealed that 25% of employees using gen AI for work access publicly available tools they personally pay for, while only 19% use in-house gen AI platforms developed by their organisation or commissioned from third-party developers. Furthermore, half of European workers who use the technology for work report using free, publicly available gen AI tools.

This raises significant concerns for chief risk officers. Unapproved tools may not adhere to the same data security and privacy standards as vetted ones, potentially exposing confidential company information and violating client confidentiality. Furthermore, these tools may lack proper guardrails and controls, leading to unreliable or inaccurate outputs, flawed decision-making, and reputational damage. One thing is clear: Simply forbidding the use of these tools without providing a viable alternative is not a winning strategy.

Bridging the trust gap: What companies should focus on

The survey outlines a clear path for businesses to build trust and encourage the responsible adoption of gen AI by adopting a trustworthy AI⁸ approach, focusing on governance, regulatory compliance, and education.

  • Build guardrails and provide the right tools: Educating employees about the risks of using unsanctioned tools is a crucial first step to minimising such dangers, especially as many workers take it upon themselves to stay current with gen AI advancements. With one in four employees in Europe paying out of pocket for access to the latest tools, organisations must collaborate with third-party developers or invest in their own gen AI solutions to produce more accurate, reliable tools while reducing risks. According to European leaders in Deloitte’s State of Generative AI in the Enterprise Q3 survey, only about 30% of the workforce has access to sanctioned gen AI tools. Without a clear gen AI strategy and policy, organisations risk falling behind their employees, who may begin setting their own standards.
  • Ensure adequate training: A robust learning and development program is crucial to maximise gen AI’s potential and minimise risks. This should cover the integration of gen AI into workflows and its ethical, responsible use. Our study shows that fewer than half of employees using gen AI have received adequate training in these areas (46% and 44%, respectively), highlighting the need for further investment in employee development. Proper training will ensure responsible gen AI use while enabling employees to benefit from increased efficiency and productivity and a focus on higher-value tasks.
  • Embrace organisational transparency: Only 51% of European workers in our study believe their employers are transparent about gen AI’s impact on their roles. Yet Deloitte’s State of Generative AI in the Enterprise Q3 survey identifies transparency as critical for scaling gen AI initiatives from pilot to production. Furthermore, in our current research, we found⁹ that transparency correlates with increased employee excitement about gen AI opportunities, a stronger desire to upskill, and greater confidence in gen AI’s ability to help them remain relevant in their careers (a simple illustration of this kind of correlation analysis appears after this list). Investing in transparency can address employee concerns and foster a more open, receptive attitude toward gen AI.
  • Prioritise data privacy: Data privacy and security are critical for building trust, with 66% of gen AI users in our sample citing them as crucial. This concern is echoed by 60% of European leaders in Deloitte’s State of Generative AI in the Enterprise Q3 survey, who are focused on managing data privacy risks. Developing robust frameworks, in collaboration with compliance, risk, and privacy teams, is essential for mitigating these risks. Organisations must recognise that handling consumer data ethically is not optional. Consumers expect their data to be used responsibly, safeguarded from bias, and used in a manner beneficial to both parties. Respecting user privacy by limiting data use and storage to its intended purpose and duration, with clear opt-in and opt-out mechanisms, is a must.
  • Keep humans in the loop: Maintaining human oversight in gen AI–driven processes is another crucial element to building trust. In Deloitte’s State of Generative AI in the Enterprise Q3 survey, 35% of European business leaders identified human oversight of gen AI–created content as a critical strategy for risk management. Our findings show that users may be reluctant to trust AI, particularly in high-stakes, complex decisions. Organisations should aim to combine human judgement with AI capabilities, especially in areas involving ethical or sensitive implications, to build further confidence in decision-making.
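
The transparency point above references a correlation analysis. As a simple illustration of how such an analysis can be run, the sketch below computes a rank correlation between two hypothetical survey items scored on a 1-to-5 agreement scale; the item wording and response values are invented placeholders, not survey data.

    from scipy.stats import spearmanr

    # Illustrative only: two survey items answered on a 1-5 agreement scale.
    # "transparency" = "My employer is transparent about gen AI's impact on my role."
    # "excitement"   = "I am excited about the opportunities gen AI can present."
    # The values below are made-up placeholders, not respondent data.
    transparency = [5, 4, 2, 1, 3, 4, 5, 2, 1, 3]
    excitement = [4, 5, 2, 1, 3, 4, 5, 3, 2, 3]

    # Spearman's rank correlation is a common choice for ordinal (Likert) items,
    # as it does not assume equal spacing between response options.
    rho, p_value = spearmanr(transparency, excitement)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

A positive, statistically significant coefficient would indicate that respondents who rate their employer as more transparent also tend to report greater excitement about gen AI.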

Making gen AI trustworthy: A business necessity

Deloitte’s survey of European consumers and employees reveals a complex relationship with gen AI, marked by both optimism and unease. While users see the potential for gen AI to improve products, services, and work experiences, there are lingering concerns about responsible implementation, data privacy, and the spread of misinformation.

The survey reveals a significant trust gap that businesses must urgently address. Users are more confident when they have control over gen AI, especially for lower-risk tasks. However, this confidence wanes when businesses use gen AI for higher-risk applications. This underscores the need for businesses to prioritise transparency, provide employees with proper tools and training, and address data privacy concerns directly.

Building trust in gen AI is both an ethical imperative and a business necessity. By adopting a trustworthy approach, organisations can unlock gen AI’s potential while mitigating risks and fostering confidence among employees and consumers. Failure to do so could drive consumers to alternative services and lead employees to bypass company policies in favour of publicly available, unsanctioned tools.

By

Roxana Corduneanu

United Kingdom

Stacey Winters

United Kingdom

Jan Michalski

Poland

Richard Horton

United Kingdom

Endnotes

  1. International Data Corporation, “Spending on gen AI solutions in Europe will exceed US$30 billion in 2027, driving the overall European AI market,” press release, March 21, 2024.

  2. Supantha Mukherjee, “UK takes top spot in Europe for gen AI startups, Accel says,” Reuters, June 20, 2024.

  3. Jim Rowan, Brenna Sniderman, Beena Ammanath, David Jarvis, and Costi Perricos, “Now decides next: Moving from potential to performance,” Deloitte, August 2024.

  4. Michael Bondar, Roxana Corduneanu, and Natasha Buckley, “Can you measure trust within your organization?” Deloitte Insights, Feb. 9, 2022.

  5. Deloitte, “Generative AI: 7 million workers and counting,” June 25, 2024.

  6. European Union, “Regulation (EU) 2024/1689: Artificial Intelligence Act,” June 14, 2024.

  7. Valeria Gallo and Suchitra Nair, “EU AI Act: Forging a strategic response,” Deloitte, Sept. 13, 2024; European Union, “Regulation (EU) 2024/1689: Artificial Intelligence Act.”

  8. Deloitte, “Trustworthy AI,” accessed Sept. 27, 2024.

  9. Results from performing a correlation analysis.


Acknowledgments

The authors would like to thank Javier Echaniz, Paul Lee, Ben Stanton, David Thogmartin, Valeria Gallo, Lucia Lucchini, Natasha Buckley, David Levin, and Nancy El-Aroussy for their contributions to this article.

Cover image by: Mark Milward