Posted: 29 July 2021 | 12-minute read

Could advanced analytics automate racism in health care?

By Heather Nelson, senior manager, and David Veroff, specialist leader, Deloitte Consulting LLP

Advanced analytics has the potential to transform the way health care organizations make treatment decisions, detect diseases, and identify rare illnesses. This technology also has the potential to exacerbate existing health inequities by embedding the unconscious biases of human designers and developers. This could cause clinicians and other health care leaders to unintentionally make biased decisions.

Advanced analytics, which includes machine learning and artificial intelligence (AI), encompasses autonomous and semi-autonomous predictive data models and tools. Biases that wind up in these tools and models could result in inaccurate clinical decisions, missed diagnoses, worsened clinical outcomes, and substandard patient experiences. For health systems, health plans, health technology firms, and life sciences companies, this can translate to higher costs of care and poorer health among people who already face inequitable outcomes. To avoid these pitfalls, stakeholders should try to identify and eliminate bias throughout the development and application of advanced analytics tools.

Bias has long been embedded in our social structures. As we explained in our recent report on activating health equity, factors including race, gender, age, location, disability status, and sexual orientation can influence both access to care and the quality of care that some patients receive. The resulting health disparities can then lead to biases in how clinicians deliver care, which can further reinforce disparities (as evidenced by racial biases in medical education1). These same factors have also shaped the drivers of health (e.g., access to healthy food, affordable housing, and quality education), which often lead to disparate health outcomes, especially among racially and ethnically diverse communities. Such disparities can affect the inputs of advanced analytics, potentially leading to biased outputs.

We’ve found that health care executives sometimes overlook the importance of these factors when developing more effective analytics. Some of our hospital and health system clients have told us they don’t need to consider the drivers of health in risk models because they serve “a fairly affluent” population. This type of thinking could lead to the development of analytics that exacerbate biases.

Even after the analytics and algorithms have been built, bias can be introduced in the interpretation of AI model outputs. Consider an algorithm that predicts a patient’s likelihood of adhering to a treatment. Clinicians might be less likely to follow up with patients the model flags as unlikely to adhere. However, prescription drug costs, transportation access, and other factors could make it difficult for a patient to follow prescribed treatment regimens. Organizations that understand existing inequities and the drivers of health might be able to adjust their technology to account for these barriers or implement programs to address them. Technology that properly accounts for these issues could help address the root causes of behaviors such as non-adherence.

The six stages of the advanced analytics lifecycle, and questions to ask

Bias can creep into advanced analytics at any stage—from initial design to rapid scaling. Health care organizations should ask themselves the following questions during each step to mitigate bias:

  1. Technology design: During the design phase, problems are identified and hypotheses are formed to solve them. Designers should intentionally consider structural biases and adjust their problem statements and hypotheses accordingly. This can help ensure that the solution addresses the needs of all users regardless of race, ethnicity, gender, or age. For example, some pulse oximeters, which pass red and infrared light through a fingertip or other area of skin, perform significantly better on lighter skin than on darker skin.2 If the designers had considered the effects of skin tone, the final product might have performed more consistently across various skin tones. Questions: Does the technology solution account for existing structural biases within its target population? Does it unintentionally exclude any communities it ends up serving?
  2. Data collection: Advanced analytics are often built on data generated by prior use of health care services. If some groups are under-represented due to long-standing biases, their health risks could be misrepresented in the resulting data sets and algorithms. Consider this: Researchers recently explored disparities in pain treatment using knee X-rays and a degenerative arthritis algorithm that considers racial and socioeconomic factors. They found that the algorithm predicted pain more accurately in underserved communities when the data set included 20% Black patients as well as lower-income patients and patients who had less education.3 Improved algorithm accuracy, due to more diverse data collection, could help clinicians more accurately diagnose and treat pain (a minimal sketch of a data-representation check follows this list). Questions: Does the data represent the population for which the tool is designed? Are any groups over- or under-represented?
  3. Algorithm development: Clinical advanced analytics tools often function as a ‘black box,’ which can make it difficult to explain and test the underlying data sets and algorithms. Clinicians who rely on predictive tools should be able to find answers to questions such as: Does the model account for existing structural biases? Developers should document the components of their advanced analytics (and the underlying data) to ensure transparency and explainability. This can be done through processes such as Datasheets for Datasets.4 Questions: How do predictive and data models perform differently across populations? Are algorithms explainable?
  4. Implementation: Once an algorithm is developed and deployed, it is important to conduct regular audits to understand how the model performs in the real world. Advanced analytics might work well in a test environment (especially if trained on a limited data set) but could lead to biased outcomes once implemented. Programmers, statisticians, and data scientists should identify processes to detect potential risks in the tools they are developing (e.g., human review, counterfactual fairness tests; see the per-group audit sketch after this list). Question: What processes exist for detecting/mitigating bias in tools?
  5. Usage: The insights extracted from advanced analytics algorithms can trigger a real-world action or decision. If the algorithm is inherently biased, a person could make an unintentionally biased decision. Consider this: Black women are four times more likely than white women to die from pregnancy-related issues, and women of color undergo cesarean sections at higher rates than white women.5 A New York City-based health system recently stopped using a tool designed to predict the likely success of a vaginal birth after cesarean section (VBAC). The algorithm inappropriately assigned women of color a lower success score due to “race correcting” (the act of making causal assumptions and clinical decisions based on race).6 Race and ethnicity questions are no longer included in the assessment tool. Question: What processes and procedures are in place to evaluate how insights from analytics are being used to trigger action?
  6. Scaling and continuous feedback: Analytic models should be adaptable. If bias is embedded in a tool or model that is then deployed at scale, that solution could exacerbate bias in the real world. Moreover, worsened real-life conditions could inform the next generation of data used for new algorithms, creating a self-reinforcing cycle of bias. Assumptions and data should be questioned and tested regularly to account for real-life applications that might not have been considered in the original design. Question: Is continued feedback used to account for real-life applications that might not have been considered in the original design and development?
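
To make the data-collection question concrete, below is a minimal sketch, in Python, of the kind of representation check a team might run before training a model. This is our illustration rather than a tool referenced in this post; the column name, group labels, reference shares, and tolerance are hypothetical assumptions.

```python
# Hypothetical representation check: compare each group's share of a
# training data set against a reference (e.g., census or service-area)
# population share and flag groups that fall short.
import pandas as pd

def flag_underrepresented(df: pd.DataFrame, group_col: str,
                          reference_shares: dict, tolerance: float = 0.05) -> dict:
    """Return groups whose observed share falls more than `tolerance`
    below their expected share in the reference population."""
    observed = df[group_col].value_counts(normalize=True)
    flags = {}
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        if expected - actual > tolerance:
            flags[group] = {"expected": expected, "observed": round(actual, 3)}
    return flags

# Illustrative usage with a hypothetical column and made-up reference shares:
# patients = pd.read_csv("training_data.csv")
# flag_underrepresented(patients, "race_ethnicity",
#                       {"Black": 0.13, "Hispanic": 0.19, "White": 0.58})
```

A flagged group is a prompt to collect more data or reweight the sample, not a guarantee of fairness; representation is necessary but not sufficient.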
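
The implementation-stage audit question can be approached in a similar spirit: compute identical performance metrics separately for each group so that gaps surface explicitly. The sketch below uses scikit-learn; the metric choices and group labels are illustrative assumptions, not a prescribed method.

```python
# Hypothetical per-group performance audit for a deployed classifier.
# A recall gap between groups signals differing missed-case (e.g.,
# missed-diagnosis) rates and is a cue to revisit data and features.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def audit_by_group(y_true: pd.Series, y_pred: pd.Series,
                   groups: pd.Series) -> pd.DataFrame:
    """Recall and precision computed separately for each group label."""
    rows = []
    for group in groups.unique():
        mask = groups == group
        rows.append({
            "group": group,
            "n": int(mask.sum()),
            "recall": recall_score(y_true[mask], y_pred[mask], zero_division=0),
            "precision": precision_score(y_true[mask], y_pred[mask], zero_division=0),
        })
    return pd.DataFrame(rows).sort_values("recall")
```

Run on a regular cadence rather than only at launch, an audit like this pairs naturally with the external reviews recommended below.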

Eliminating bias could improve social and financial bottom lines

Biases in health care advanced analytics can increase mistreatment, undertreatment, and overtreatment. These issues, in turn, can lead to poor health outcomes for consumers and worsened financial performance for hospitals, health systems, health plans, health technology firms, and life sciences companies.

Mitigating bias throughout the advanced analytics lifecycle can help improve social and financial bottom lines for health care organizations. Consider these strategies:

  • Demand robust governance: Establish data guidelines and thresholds and encourage teams to check each other’s assumptions and models for quality and sensitivity. Use robust governance to protect sensitive data and correct advanced analytics that are discovered to be biased. Demand transparency and explainability when collecting data sets and training models.
  • Build diverse teams: Design (or redesign) the organization’s operating model/structure to focus on team diversity. Train data scientists and developers to avoid common technology bias pitfalls. Even organizations that are intentional about activating equity risk perpetuating bias in the absence of diverse teams.
  • Apply human-centered design: Leverage human-centered design and testing for accessibility/usability to help ensure equitable deployment across relevant populations. Involve multiple stakeholders—especially historically under-represented patient populations—in advanced analytics design.
  • Audit proactively: Engage subject matter experts and tools to audit advanced analytics and underlying data sets. Commit to a regular cadence of external audits, beginning when the tool or model launches, to detect bias and define mitigation tactics. Recurring audits can help account for changes in consumer behavior and in the algorithm as it matures.

Biases that exist in advanced analytics typically stem from human designers and the structurally biased health care system in which they operate. What was put in place by humans can only be undone through intentional human intervention throughout the advanced analytics lifecycle. Technology is part of the solution, but intentional human intervention should be the driver.

Acknowledgements: Bonnie Tung, Andrea Schiff, Tom Aiello


Endnotes

1. The role of medical schools in propagating physician bias, The New England Journal of Medicine, March 4, 2021

2. Racial bias in pulse oximetry measurement, The New England Journal of Medicine, December 17, 2020

3. An algorithmic approach to reducing unexplained pain disparities in underserved populations, Nature Medicine, January 2021

4. Datasheets for Datasets, Cornell University, March 23, 2018

5. Working together to reduce Black maternal mortality, Centers for Disease Control and Prevention, April 9, 2021

6. NYC Health + Hospitals drops use of two race-based clinical assessments, MedCity News, May 18, 2021
