As businesses begin to use generative AI, earning and preserving consumer trust can be a paramount challenge. If trust is not secured, consumer engagement may decline, and businesses could miss out on the transformative benefits this technology can offer.
This is likely true for health care organizations, which tend to face unique challenges when it comes to the adoption of generative AI tools. These organizations handle highly sensitive, personal data, and decisions based on AI outputs can have life-altering consequences for people and their health. Therefore, it’s critically important to ensure AI-generated results are both accurate and reliable. The industry is also heavily regulated, so any use of new technologies must comply with myriad regulations related to patient privacy, data security, and ethical considerations. Given these challenges, it’s important for health care organizations to build and maintain consumer trust in their use of generative AI.
To better understand the challenges health care organizations may be facing in building that trust, the Deloitte Center for Health Solutions surveyed more than 2,000 US adults in March 2024 about their use of gen AI in health care. The findings show that, overall, consumers continue to be optimistic about the potential of gen AI to address health care challenges like access and affordability, as we found in our 2023 consumer survey. And, of the consumers who have used gen AI for health reasons,1 66% think it could reduce extended wait times for doctor’s appointments and lower individual health care costs.
Despite that optimism, consumers’ adoption of gen AI for health reasons hasn’t progressed meaningfully over the last year. In fact, the survey shows that consumer adoption of gen AI for health reasons has remained flat, with just 37% of consumers using it in 2024 versus 40% in 2023 (these findings are close and within the margin of error for the survey). Furthermore, one of the most prominent and growing reasons for the stagnant adoption is distrust in the information that the tool produces (figure 1). When asked why they’re not using gen AI for health and wellness purposes, more consumers chose “I don’t trust the information” in this year’s survey (30%) than they did in 2023 (23%).
Compared to last year’s survey, consumers’ distrust in gen AI–provided information has increased among all age groups, with a particularly sharp increase among two key demographic groups: millennials and baby boomers. In 2024, 30% of millennials expressed distrust in the information, up from 21% in 2023. In a similar trend, the percentage of baby boomers expressing distrust rose to 32% in 2024, up from 24% in 2023.
As it stands, consumers are generally using free and publicly available gen AI tools to engage with this technology for health and wellness purposes.2 However, due to the continually developing nature of the technology, these versions may sometimes provide inaccurate information, which can lead to diminished consumer trust.3 This presents an opportunity for health care organizations to bolster trust by educating consumers, providing them with gen AI tools specifically designed for health care applications, and addressing privacy concerns. Here are a few key considerations to help organizations effectively engage consumers—while earning their trust—and improve the adoption of gen AI:
To incorporate gen AI into health care, it’s important that health care organizations first gain clinicians’ endorsement. While some clinicians are enthusiastic about leveraging gen AI to augment their care delivery capabilities, 41% of physicians had concerns about patient privacy, and 39% were worried about the impact on the patient-physician relationship.4 Coupled with a general skepticism toward the technology’s role in clinical care, these issues will likely hinder the adoption of gen AI.5
To help overcome these challenges, health care organizations should revise their policies and procedures, ensuring that gen AI tools in use comply with all regulations governing the storage of protected health information, including the Health Insurance Portability and Accountability Act and any relevant state privacy laws.
Incorporating gen AI into the medical school curriculum can also be beneficial. This would allow clinicians to not only understand the benefits but also recognize potential limitations, such as possible biases within the algorithm, and propose new ways to address them. Ensuring that the information generated does not contribute to inequities will likely help clinicians feel comfortable accepting and promoting the use of generative AI among patients.
To try to meet consumer needs and alleviate their concerns, health care organizations should consider developing transparent processes and designing regulatory and patient protection programs. This involves providing consumers with clear information about data collection methods, usage, and safeguarding, as well as educating them about the limitations of the technology.
Implementing a gen AI framework that emphasizes transparency, explainability, monitoring, and assessment could significantly build consumer trust.6 For example, a clinical recommendation that has been generated with the assistance of gen AI may require a disclaimer stating that it was system-derived. Along with this, consumers should be provided with accessible data or explanations as to why that recommendation was made.
The future of gen AI in health care is full of potential—especially if consumer trust can be established and sustained. The path to success involves not only technological progress, but also the capacity of health care organizations to align this technology with the values, expectations, and trust of the consumers they serve. With that commitment, generative AI could be more than a transformative tool; it could become a trusted ally in the pursuit of better health outcomes and more affordable health care.