Posted: 09 May 2024

AI in health care: Balancing innovation, trust, and new regs

By the Deloitte Center for Health Solutions

Artificial intelligence (AI) and generative AI could touch virtually every industry, but the technology could pose unique challenges for health care and life sciences. The use of AI in diagnostics, decision-making, claims processing, or coverage decisions, for example, could lead to new risks related to patient care, safety, and discrimination.

During an April 30 webcast, four of Deloitte’s AI thought leaders outlined some of the key issues that are shaping the development and use of AI in health care and life sciences (see the replay of the presentation, Emerging artificial intelligence policy).

Instilling sustainable trust in AI could be a significant hurdle. Issues related to trust have historically slowed the adoption of new technologies—from the start of the industrial revolution in the 1700s to the computing revolution in the last century, explained Asif Dhar, M.D., US Life Sciences & Health Care leader, Deloitte Consulting LLP. Without trust, consumers, clinicians, and organizations are unlikely to make the most of generative AI solutions (see From code to cure, how generative AI can reshape the health frontier).

Bill Fera, M.D., principal, Deloitte Consulting LLP, agreed and said industries often don’t pay enough attention to the impact of trust when it comes to accepting new technology (see Overcoming generative AI implementation blind spots in health care). Generative AI, he told attendees, holds the promise of deepening and restoring trust in health care, but it also has the potential to exacerbate mistrust and introduce new skepticism among consumers and other health care stakeholders. For example, if the data used to train AI models is biased or unbalanced, the information being generated might not be reliable. The technology has also been shown to “hallucinate,” generating false information when it hasn’t been trained on an appropriate data set or tuned for context.

Bill encouraged health care and life sciences organizations to establish a center of excellence with appropriate governance structures and trustworthy frameworks. Organizations could combine or augment traditional governance constructs (e.g., policy, accountability) with differential ones such as ethics review, bias testing, and surveillance. However, a recent Deloitte survey of industry executives found that only about 60% said they had developed an overall governance framework, and 45% said they are prioritizing the building of trust among consumers to share and use their data (see From Fax machines to GenAI, are health systems ready?).

Lawmakers, regulators are building AI rules

Establishing guardrails for the safe and appropriate use of AI is a priority for the White House, Congress, states, and multiple federal agencies,1 as well as for governments around the world.2 Policy and regulations can help stimulate the creation and adoption of AI frameworks and encourage trust. The technology, however, is still a couple of steps ahead of policymakers, according to Anne Phelps, the US Health Care Regulatory leader for Deloitte.

Last October, President Biden signed an executive order (EO) aimed at setting some parameters for the use of AI across all industries, including a call for national privacy legislation.3 However, Anne stated that health care is somewhat unique given that the industry has been governed by a national privacy law—the Health Insurance Portability and Accountability Act (HIPAA)—for roughly two decades. She explained that HIPAA provides a framework for when patient information can be used and when patient consent is needed.4 “As Congress debates a possible national privacy law, it will be interesting to see how well it builds off of the HIPAA framework,” she said. In addition, Anne discussed other critical policy issues such as creating transparency for consumers on how AI tools are being used. She also talked about removing bias in data and defining levels of risk, such as when human intervention should be involved for issues related to patient safety and care.

The Department of Health and Human Services (HHS) recently finalized a rule that will require more transparency around AI and machine learning.5 The Centers for Medicare & Medicaid Services (CMS) has clarified that Medicare Advantage (MA) organizations can use AI and related technologies to assist in making coverage determinations. But such technologies cannot override standards related to medical necessity and other coverage determinations.6

HHS and CMS tend to be seen as the agencies with the most direct impact on health care and life sciences. But other agencies, including some that have not historically regulated health care, are beginning to exert enforcement authority over health information as it relates to AI. Here are a few examples:

  • The U.S. Food & Drug Administration (FDA): The agency issued draft guidance on AI and machine learning last year and continues to evaluate the use of AI and its applications for drugs, biologics, and medical devices.7 The number of applications submitted to the FDA that incorporate AI has increased significantly.8
  • The Office for Civil Rights (OCR): HHS’s OCR issued a Final Rule on April 26 to strengthen nondiscrimination protections, address biases in health technology, and protect patients when AI is being used in health care. The rule clarifies that “nondiscrimination in health programs and activities continues to apply to the use of AI, clinical algorithms, predictive analytics, and other tools,” according to OCR.9
  • The Office of the National Coordinator for Health Information Technology (ONC): In January, the ONC published a Final Rule that included requirements for AI and other predictive algorithms.10 Health IT developers are required to make information available on the development, evaluation, fairness, effectiveness, and ongoing monitoring of predictive decision-support technologies that interface with electronic health records.
  • The Federal Trade Commission (FTC): The FTC has taken an active role on health care privacy and AI. The Commission has not yet issued formal rulemaking on AI but has provided blog posts and guidance signaling potential future enforcement actions.11

Anne noted that many federal agencies are trying to keep up with the rapidly evolving technology by hiring technologists who can help them understand the deployment of AI on a variety of different levels.12 Both the House and Senate have held hearings on the use of AI, and more legislation is likely to be introduced.13 In addition, at least 16 states have enacted AI-related laws.14 The European Union has also been developing legislation and frameworks.15 Asif agreed that laws and policies could help instill trust and help ensure AI continues to advance safely. But he noted that unlike other products and services that are regulated, AI is undergoing constant change and continuous improvement.

Mitigating biases and addressing health equity

Algorithms built on biased data sets could generate inaccurate predictions or perpetuate health inequities by age, ethnicity, gender, or race. Policymakers say they are concerned with identifying and mitigating bias from underlying data sets and protecting consumers from AI being used to perpetuate discrimination or health care inequities.

AI-enabled technologies are often built off data that is generated by humans who have built-in individual and systemic biases, explained Jay Bhatt, D.O., managing director of the Deloitte Health Equity Institute and the Deloitte Center for Health Solutions. While technology has advanced, health care data itself has remained largely the same. Training AI on biased or incomplete data related to age, ethnicity, gender, or race could lead to decisions that may have unintended consequences. Inequities in the US health system cost approximately $320 billion a year, an amount that could grow to $1 trillion by 2040 if left unaddressed (see US health can't afford health inequities). While AI has the potential to make health care more equitable, eight out of 10 health equity leaders are not at the table when AI strategy is being developed, according to the results of a recent Deloitte survey (see our 2024 Outlook for Health Equity). That is likely to change if AI becomes a more integrated part of health care delivery, he said.
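To make the “bias testing” mentioned earlier more concrete, here is a minimal, hedged sketch of one common approach: disaggregating a model’s accuracy by demographic group and flagging large gaps. The group labels, sample records, and 0.1 disparity threshold below are invented for illustration; they do not come from any specific Deloitte or regulatory framework, and real-world testing would use richer fairness metrics and statistical tests.

```python
def subgroup_accuracy(records):
    """Compute accuracy per demographic group.

    Each record is a tuple: (group, predicted_outcome, actual_outcome).
    """
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}


def flag_disparity(accuracies, max_gap=0.1):
    """Return True if the accuracy gap between the best- and
    worst-served groups exceeds the chosen threshold."""
    values = list(accuracies.values())
    return max(values) - min(values) > max_gap


# Hypothetical model outputs: (group, predicted, actual)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
accuracies = subgroup_accuracy(records)
print(accuracies)           # group A: 1.0, group B: 0.5
print(flag_disparity(accuracies))  # True: the gap exceeds 0.1
```

A check like this could sit inside the kind of governance framework described above, running whenever a model is retrained so that accuracy gaps across groups surface before deployment rather than after.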

Conclusion

Generative AI (a subset of AI) has the potential to address many of the sector’s most challenging issues (e.g., access, patient wait times, administrative burdens, staff burnout) and could revolutionize the way health care is delivered. The technology could help clinicians, care teams, and patients develop far more personalized care plans or help patients offset behaviors that can negatively impact their health. It could also help health care organizations develop models that can predict which patients are at risk of developing certain diseases, have unmet needs, or could benefit from particular interventions.

While these technologies have the potential to revolutionize diagnostics, decision-making processes, and administrative tasks, they also introduce risks related to patient care, safety, and discrimination. Trust appears to remain a central issue in the adoption of AI, with concerns around data bias, reliability, and the potential for false information generation. To help mitigate these challenges, industry leaders should focus on governance structures and trustworthy frameworks, coupled with the introduction of policies and regulations for safe and equitable AI use.

Editor’s note: On May 8, James Bush, a principal at Deloitte Consulting LLP, led a discussion that explored how AI capabilities are helping to transform the health care ecosystem. Panelists included Eve Cunningham, M.D., group vice president and chief of virtual care and digital health at Providence; and Shane Hochradel, COO of health solutions at Elevance Health. A recording of the presentation, Capturing value from AI in business and health, is available.


This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor.

Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

Endnotes:

1. Memorandum for the heads of executive departments and agencies, Executive Office of the President, Office of Management and Budget, WhiteHouse.gov, March 28, 2024.

2. Europe's AI Act: How does it work and what happens next?, Associated Press, March 13, 2024.

3. FACT SHEET: President Biden Issues Executive Order on safe, secure, and trustworthy AI, WhiteHouse.gov, October 30, 2023.

4. Health information privacy, HHS.

5. HHS finalizes rule to advance health IT interoperability and transparency, HHS, December 13, 2023.

6. Medicare Advantage plans can’t deny care with AI, CMS warns, STAT News, February 7, 2024.

7. Artificial Intelligence and medical products, FDA, March 30, 2024.

8. FDA sees rapid uptick in drug and biologic submissions with AI/ML components, July 12, 2023.

9. New rule to strengthen nondiscrimination protections and advance civil rights in health care, HHS, April 26, 2024.

10. HHS finalizes rule to advance health IT interoperability and algorithm transparency, December 13, 2023.

11. PrivacyCon looks at latest research into AI, mobile device security, health privacy, deepfakes, and more, FTC, February 27, 2024.

12. Interest high in federal AI-related positions, FEDweek, May 3, 2024.

13. Senate holds hearing on policy considerations for AI in health care, American Hospital Association, November 8, 2023.

14. 16 states have AI laws, most of them to curb profiling, Legal Dive, March 20, 2024.

15. Europe's AI Act: How does it work and what happens next?, Associated Press, March 13, 2024.
