Building and maintaining health care consumers’ trust in generative AI

Deloitte’s latest Consumer Health Care survey shows that building consumer trust in gen AI is crucial to fully harnessing the transformative technology’s potential in health care

Bill Fera

United States

Jennifer A Sullivan

United States

As some businesses begin to use generative AI, earning and preserving consumer trust can be a paramount challenge. If trust is not secured, consumer engagement may decline, and businesses could miss out on the transformative benefits this technology can offer.

This is likely true for health care organizations, which tend to face unique challenges when it comes to the adoption of generative AI tools. These organizations handle highly sensitive, personal data, and decisions based on AI outputs can have life-altering consequences on people and their health. Therefore, it’s critically important to ensure AI-generated results are both accurate and reliable. The industry is also heavily regulated, so any use of new technologies must comply with a myriad of regulations related to patient privacy, data security, and ethical considerations. Given these challenges, it’s important for health care organizations to build and maintain consumer trust in their use of generative AI.

To better understand the challenges health care organizations may be facing in building that trust, the Deloitte Center for Health Solutions surveyed more than 2,000 US adults in March 2024 about their use of gen AI in health care. The findings show that, overall, consumers continue to be optimistic about the potential of gen AI to address health care challenges like access and affordability, as we found in our 2023 consumer survey. And, of the consumers who have used gen AI for health reasons,1 66% think it could potentially reduce extended wait times for doctor’s appointments and lower individual health care costs.

Despite that optimism, consumers’ adoption of gen AI for health reasons hasn’t progressed meaningfully over the last year. In fact, the survey shows that consumer adoption of gen AI for health reasons has remained flat, with just 37% of consumers using it in 2024 versus 40% in 2023 (these findings are close and within the margin of error for the survey). Furthermore, one of the most prominent and growing reasons for the stagnant adoption is distrust in the information that the tool produces (figure 1). When asked why they’re not using gen AI for health and wellness purposes, more consumers chose “I don’t trust the information” in this year’s survey (30%) than they did in 2023 (23%).

Compared to last year’s survey, consumers’ distrust in gen AI–provided information has increased among all age groups, with a particularly sharp increase among two key demographic groups: millennials and baby boomers. In 2024, 30% of millennials expressed distrust in the information, up from 21% in 2023. In a similar trend, the percentage of baby boomers expressing distrust rose to 32% in 2024, up from 24% in 2023.

How health care organizations can increase consumer trust in generative AI

As it stands, consumers are generally using free and publicly available gen AI tools to engage with this technology for health and wellness purposes.2 However, due to the continually developing nature of the technology, these versions may sometimes provide inaccurate information, which can lead to diminished consumer trust.3 This presents an opportunity for health care organizations to bolster trust by educating consumers, providing them with gen AI tools specifically designed for health care applications, and addressing privacy concerns. Here are a few key considerations to help organizations effectively engage consumers—while earning their trust—and improve the adoption of gen AI:

  • Engage clinicians as change agents. Consumers tend to place high trust in and rely heavily on the expertise of clinicians when it comes to their health and well-being. According to our survey, 74% of respondents view doctors as their most trusted source of information about health care treatment options. These clinicians could serve as key influencers, educating consumers about the potential advantages of provider-curated and -monitored gen AI tools, such as facilitating faster and more accurate diagnoses and delivering personalized care. They could also help present this information in a way consumers easily understand, thereby increasing trust in gen AI. In fact, our survey indicates that most consumers are comfortable with their doctors using gen AI to convey information about new treatments (71%), interpret diagnostic results (65%), and even diagnose conditions and illnesses (53%).

To incorporate gen AI into health care, it’s important that health care organizations first gain clinicians’ endorsement. While some clinicians are enthusiastic about using gen AI to augment their care delivery capabilities, 41% of physicians had concerns about patient privacy, and 39% were worried about the impact on the patient-physician relationship.4 Coupled with a general skepticism toward the technology’s role in clinical care, these concerns will likely hinder the adoption of gen AI.5

To help overcome these challenges, health care organizations should revise their policies and procedures to ensure that the gen AI tools in use comply with all regulations governing the storage of protected health information, including the Health Insurance Portability and Accountability Act (HIPAA) and any relevant state privacy laws.

Incorporating gen AI into the medical school curriculum can also be beneficial. This would allow clinicians to not only understand the benefits but also recognize potential limitations, such as possible biases within the algorithm, and propose new ways to address them. Ensuring that the information generated does not contribute to inequities will likely help clinicians feel comfortable accepting and promoting the use of generative AI among patients.

  • Be transparent with consumers. While consumers are generally comfortable with providers using gen AI for making health care decisions, they want clarity about its usage. In our survey, 80% of consumers said they’d like to be informed about how their health care provider is using gen AI to augment health care decisions, identify treatment options, and provide support. Additionally, of the consumers who are not currently using gen AI themselves, 64% are supportive of their health care providers using it for care delivery. This interest, however, is contingent on the assurance that their personal data is being handled responsibly and protected securely.

To try to meet consumer needs and alleviate their concerns, health care organizations should consider developing transparent processes and designing regulatory and patient protection programs. This involves providing consumers with clear information about data collection methods, usage, and safeguarding, as well as educating them about the limitations of the technology.

Implementing a gen AI framework that emphasizes transparency, explainability, monitoring, and assessment could significantly build consumer trust.6 For example, a clinical recommendation that has been generated with the assistance of gen AI may require a disclaimer stating that it was system derived. Along with this, consumers should be provided with accessible data or explanations as to why that recommendation was made.
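As one illustration of how such a disclaimer and explanation might travel with a recommendation, here is a minimal Python sketch. The class, field names, and model name are hypothetical examples, not an actual clinical system, vendor schema, or Deloitte framework:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssistedRecommendation:
    """A clinical recommendation packaged with transparency metadata:
    which system assisted, why the recommendation was made, and whether
    a clinician has reviewed it. All names here are illustrative."""
    recommendation: str
    model_name: str                       # the gen AI system that assisted
    supporting_evidence: list[str] = field(default_factory=list)
    reviewed_by_clinician: bool = False

    def patient_facing_summary(self) -> str:
        """Render the recommendation with the system-derived disclaimer
        and an accessible explanation of why it was made."""
        lines = [
            self.recommendation,
            f"Disclaimer: this recommendation was derived with the "
            f"assistance of an AI system ({self.model_name}).",
        ]
        if self.supporting_evidence:
            lines.append("Why: " + "; ".join(self.supporting_evidence))
        if self.reviewed_by_clinician:
            lines.append("A clinician has reviewed this recommendation.")
        return "\n".join(lines)

rec = AIAssistedRecommendation(
    recommendation="Schedule a follow-up A1C test in three months.",
    model_name="example-clinical-llm",
    supporting_evidence=["Most recent A1C was above target range"],
    reviewed_by_clinician=True,
)
print(rec.patient_facing_summary())
```

Packaging the disclaimer and evidence with the recommendation itself, rather than adding them downstream, helps ensure that every AI-assisted output a consumer sees carries the transparency information the framework calls for.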

  • Enlist community partners as advocates of the technology. Health care organizations could focus on sharing information about gen AI with credible community organizations, equipping them to address questions about the technology, its applications, and its effectiveness for the communities they represent. This approach involves identifying potential partners like community health centers, state and local health agencies, faith-based organizations, and others. These entities already have the trust of consumers and act as reliable sources of health care information across various demographics.7 By aligning their messaging with these organizations, health care companies can effectively enhance consumer understanding and acceptance of gen AI on a wider scale.

The future of gen AI in health care is full of potential—especially if consumer trust can be established and sustained. The path to success involves not only technological progress, but also the capacity of health care organizations to align this technology with the values, expectations, and trust of the consumers they serve. With that commitment, generative AI could be more than a transformative tool; it could become a trusted ally in the pursuit of better health outcomes and more affordable health care.


Endnotes

  1. In our survey, health reasons included options such as learning more about health conditions, looking for treatment options, getting recommendations for where to seek care, and finding and comparing health plans and providers.

  2. Salesforce, “More than half of generative AI adopters use unapproved tools at work,” November 15, 2023.

  3. Digital Government of New Zealand, “Managing the risks of gen AI to the public service,” accessed May 23, 2024. 

  4. Shubham Sharma, “Lack of trust slowing down AI revolution in medical settings: GE HealthCare report,” VentureBeat, June 6, 2023.

  5. Dave Pearson, “Physicians are embracing clinical genAI—in theory, at least,” AI in Healthcare, April 17, 2024. 

  6. Deloitte, “Trust in the era of generative AI,” accessed May 23, 2024; Deloitte, “At the nexus of health care and generative AI,” accessed May 23, 2024.

  7. Leslie Read, Leslie Korenda, and Heather Nelson, “Rebuilding trust in health care,” Deloitte Insights, August 5, 2021. 


Acknowledgments

The authors would like to thank the following professionals from the project team: Leslie Korenda for being instrumental to the scoping, research, analysis, and review of the paper; Richa Malhotra for assistance in the survey design; and Wendy Gerhardt for invaluable guidance on shaping the project and helping review the paper.

The authors would also like to thank Asif Dhar (MD), Jay Bhatt (DO), Peggah Khorrami, Steve Davis, Elya Papoyan, Rebecca Knutsen, Prodyut Ranjan Borah, Christina Giambrone, and the many others who contributed to the success of this project.

Cover image by: Harry Wedel