Artificial intelligence (AI) has the potential to drastically improve patient outcomes. AI uses algorithms to take in data from the world, build a representation of that data, and draw inferences from it. From handling administrative tasks to actively diagnosing disease, AI could make treatment faster and more effective in clinical settings, especially as the technology continues to improve.
However, AI can suffer from bias, which has striking implications for health care. The term “algorithmic bias” speaks to this problem. It was first defined by the co-directors of the Applied Artificial Intelligence for Health Care program at the Harvard T.H. Chan School of Public Health: Trishan Panch, primary care physician, president-elect of the HSPH Alumni Association, and co-founder of digital health company Wellframe, and Heather Mattie, lecturer of biostatistics and co-director of the health data science master’s program.
In their 2019 paper in the Journal of Global Health, “Artificial intelligence and algorithmic bias: implications for health systems,” Panch, Mattie, and Rifat Atun define algorithmic bias as the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation, amplifying inequities in health systems.
In other words, algorithms in health care technology don’t simply reflect back social inequities but may ultimately exacerbate them. What does this mean in practice, how does it manifest, and how can it be counteracted?
How Does Algorithmic Bias in Health Care Happen — and Why Is It so Damaging to Patients?
Algorithmic bias is not a new problem, and it is not specific to AI. In fact, an algorithm is merely a series of steps — a recipe and an exercise plan are as much algorithms as a complex statistical model. At the core of any health system challenge, including algorithmic bias, lies a question of values: which health care outcomes are societally important, and why? How much money should go toward health care, and who should benefit from improved outcomes? “It’s as much an issue of society as it is about algorithms,” says Panch.
“If you look at algorithmic bias as just a technical issue, it will beget engineering solutions — how can you restrict certain fields such as race or gender from the data, for example. But that won’t really solve the problem alone. If the world looks a certain way, that will be reflected in the data, either directly or through proxies, and thus in the decisions.”
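The proxy problem Panch describes can be shown with a toy simulation. The sketch below uses entirely synthetic, hypothetical data: a protected group membership is never given to the model, but a correlated proxy (here, a made-up zip code) is, so the "blinded" risk scores still differ sharply by group.

```python
import random

random.seed(0)

# Synthetic records: group "A" lives mostly in zip 1, group "B" mostly in
# zip 2 (90% of the time each). These values are purely illustrative.
records = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 2
    records.append({"group": group, "zip": zip_code})

def blinded_score(rec):
    """A 'blinded' model: it never sees `group`, only the zip proxy."""
    return 0.8 if rec["zip"] == 1 else 0.2

def mean_score(grp):
    """Average risk score the blinded model assigns to one group."""
    scores = [blinded_score(r) for r in records if r["group"] == grp]
    return sum(scores) / len(scores)

# Despite the protected field being removed, average scores still
# diverge by group, because the proxy carries the same information.
print(mean_score("A"), mean_score("B"))
```

Restricting the race or gender field changes nothing here: the inequity re-enters through the proxy, which is exactly why a purely technical fix falls short.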
In fact, it’s been demonstrated that simple prediction rules for heart disease, used in routine medical practice in industrialized countries for decades, were biased. The Framingham Heart Study cardiovascular risk score performed well for Caucasian patients but poorly for African American patients, meaning that care informed by the score could be both inaccurate and inequitably distributed. In the field of genomics and genetics, it’s estimated that Caucasians make up about 80 percent of collected data, so study findings may apply better to that group than to other, underrepresented groups.
Brian Powers, a faculty member in Applied Artificial Intelligence for Health Care, co-authored a landmark 2019 paper in Science showing that an algorithm widely used by prominent health systems today is racially biased. Health care professionals use the algorithm’s output to recommend certain patients for medical care, so the bias has direct and potentially harmful implications for patients.
What Can Data Science Teams Do to Prevent and Mitigate Algorithmic Bias in Health Care?
According to Mattie, “Bias can creep into the process anywhere in creating algorithms: from the very beginning with study design and data collection, data entry and cleaning, algorithm and model choice, and implementation and dissemination of the results.” Bias has a trickle-down effect and must be addressed at every step of the process.
Therefore, combating algorithmic bias means that data science teams should include professionals from a diversity of backgrounds and perspectives, not simply data scientists who have a technical understanding of AI. In the Journal of Global Health paper, Panch and Mattie suggested that clinicians should be part of these teams, as they can provide a deep understanding of the clinical context that will improve modeling.
“There’s a tradeoff between performance in algorithms and bias,” says Panch. “There will probably always be some amount of bias, because the inequities that underpin bias are in society already and influence who gets the chance to build algorithms and for what purpose. It will require normative action and collaboration between the private sector, government, academia, and civil society.”
That will take time. In the interim, it’s necessary to be mindful about certain groups that are disadvantaged and work to “protect” them so that they receive greater care, often by setting an artificial standard in the algorithm that overemphasizes these groups and de-emphasizes others. This is technically difficult and as yet unproven, though there is pioneering research work in this area.
“You take a hit in overall accuracy to bump up accuracy across groups of people. That’s where a lot of bias creeps in, because people try to get overall accuracy as high as possible. But if a model is only accurate for certain groups, it causes problems down the line,” says Mattie.
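Mattie’s point is easy to see numerically. In the hypothetical evaluation below (all counts invented for illustration), a model that is very accurate for a large majority group can post a high overall accuracy even while it serves a smaller group badly — which is why reporting per-group accuracy, not just the headline number, matters.

```python
# Hypothetical test set: 900 majority-group patients, 100 minority-group.
# The model is correct 95% of the time for the majority, 60% for the
# minority. Each row is (group, prediction_was_correct).
majority = [("maj", True)] * 855 + [("maj", False)] * 45
minority = [("min", True)] * 60 + [("min", False)] * 40
results = majority + minority

def accuracy(rows):
    """Fraction of rows where the model's prediction was correct."""
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(results)  # (855 + 60) / 1000 = 0.915
per_group = {g: accuracy([r for r in results if r[0] == g])
             for g in ("maj", "min")}  # maj: 0.95, min: 0.60

print(overall, per_group)
```

A team optimizing only `overall` would see 91.5 percent and stop; the per-group breakdown reveals the 35-point gap. Upweighting the minority group during training would shrink that gap at the cost of some overall accuracy — the tradeoff Panch and Mattie describe.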
This further emphasizes that data collection, analysis, and study require broader, more diverse teams. “Having as many eyes and evaluations on the process is a really good start. And ultimately there could also be checklists or safeguards along the way,” says Mattie.
What Can Minimize Large-Scale Algorithmic Bias and Effectively Harness the Power of AI?
Today, more health professionals are at least aware of algorithmic bias. Many companies are also taking proactive steps to promote diversity, equity, and inclusion (DEI) in their teams. “Without this work, it is impossible to address implicit and explicit bias in the people that develop algorithms and the data generating processes they study,” says Panch.
There are two approaches that currently attempt to combat algorithmic bias in health systems on an industry-wide scale:
- Calibrating incentives: If researchers or other professionals can prove that a data analysis is biased, they can pursue litigation, such as class action lawsuits. The threat of legal action incentivizes private companies to change, or to examine their algorithms for bias preemptively.
- Formal legislation: Legal measures lag behind. Current legislation protects certain groups by requiring the removal of fields that could lead to unfair judgments, such as race, gender, socioeconomic background, and disability. But health care algorithms often need these very factors so that those groups receive proper care — a tension current legislation does not yet account for.
Furthermore, researchers continue to refine this process of algorithm development in order to not just optimize performance but also minimize bias. Ideally, a system of checks and balances will ultimately help to minimize errors more regularly and ensure the sustainability of health gains over time.
“There’s no silver bullet for any of this,” says Mattie. “But you can take steps to minimize bias as much as possible.”
Harvard T.H. Chan School of Public Health offers Applied Artificial Intelligence for Health Care, an online program that explores the fundamental concepts of AI in health care and how it can support your organization’s strategy and serve your patients.