How artificial intelligence might help health care—or harm it

Panelists, from left: Lucila Ohno-Machado, Andy Beam, Milind Tambe, and moderator Carey Goldberg

May 1, 2024 – Artificial intelligence (AI) in health care can be very beneficial—or very problematic, if we’re not careful about how it’s used, said experts at a Harvard T.H. Chan School of Public Health event.

AI could help with things like providing diagnoses for certain medical conditions or supplementing the work of health organizations, according to panelists who spoke at the in-person, livestreamed event, held April 30 in the Harvard Chan Studio. But it could also provide biased information, or be used to push misinformation, they said.

Speakers included Andy Beam, assistant professor of epidemiology at Harvard Chan School and deputy editor of NEJM AI; Lucila Ohno-Machado, deputy dean for biomedical informatics and chair of biomedical informatics and data science at the Yale University School of Medicine; and Milind Tambe, Gordon McKay Professor of Computer Science and director of the Center for Research on Computation and Society at Harvard University, and principal scientist and director of “AI for Social Good” at Google Research. Carey Goldberg—science, health, and medicine reporter and co-author of “The AI Revolution in Medicine: GPT-4 and Beyond”—was moderator.

On the plus side, AI can provide medical expertise for people who lack access, said Beam. “If you live in rural parts of the country and your nearest physician is three hours away, you can at least get access to a facsimile [of medical expertise] quickly, cheaply, and easily,” he said.

AI may also help speed up diagnoses in the mental health arena, Beam added. For instance, he said, “A person with type 1 bipolar disorder, on average, is undiagnosed for seven years. That can be a very rocky seven-year period. It can manifest to the [person’s] family as [something like] substance abuse, and there’s no clear indication of what’s going on.” Access to AI may lead to a quicker diagnosis and improve the quality of life for the person with that condition, Beam said.

He noted that there have been documented cases of people on “medical odysseys”—those who’ve struggled for years to find a diagnosis for a mysterious medical ailment—who found what they were looking for thanks to AI.

Tambe said that AI can be beneficial in the mobile health arena. For example, a nonprofit he works with in India called ARMMAN runs a mobile health program that delivers automated messages to pregnant women and new mothers, such as reminders to take iron or calcium supplements. AI has been able to help the organization determine which women to focus its interventions on, he said. Tambe also noted that an organization hoping to increase uptake of vaccines might be able to use AI to help determine how best to do so, such as by recommending whom to target for interventions like travel vouchers or reminders.

While AI can provide efficiencies, Tambe cautioned that he wouldn’t want it to be used “in a way that eliminates the human touch where it’s absolutely needed.”

That theme—treading carefully when it comes to AI—was echoed by the other panelists. “In terms of diagnosis, if you want hypothesis generation, [AI] can help you,” Ohno-Machado said. “If you’re trusting AI solely to do the diagnosis, I think we’re not there yet.”

Beam said one of his top concerns about the use of AI in health care, and in general, is misinformation. “We now have open-source [AI] models that are as powerful as GPT-4—the model behind ChatGPT [the most well-known AI system]—and there are essentially no safeguards that would stop a bad actor from using that to spread misinformation,” he said. That means that you could be chatting with someone on the Internet and be unable to tell whether that person is real or an AI designed to make you believe something false, he said.

Bias is another concern. Training sets—curated data used to train AI models to learn patterns and relationships—can be biased, said Ohno-Machado. “We can try to tweak algorithms [that drive AI], but there is no substitute for having high-quality training data, and quality construction of models.” Beam concurred. “There is bias inherent in the health care system that’s codified in the data, and [AI] automates and operationalizes [that bias].” It’s important to make sure “that we are teaching our models what we actually want them to learn, versus what’s coded in the data,” he said.

The panelists recommended other ways to ensure that AI is used safely and responsibly in health care, such as closely evaluating AI models, having the Food and Drug Administration regulate the models as medical devices, and training people working in the AI realm on how to use it for social good.

For Beam, the best-case scenario for AI in health care would be that, eventually, it operates in the background “so that my life and my interactions with the health care system are more seamless—they’re quicker, they’re cheaper, and they’re better.” He hopes that AI will be able to do things like help systematize the vast amounts of evidence in the medical literature or provide real-time monitoring of air quality conditions—that it will become “something that can pull [all of that] into a cohesive whole and give you concrete, simple-to-follow advice in real time.”

Karen Feldscher

Learn more

Misinformation doesn’t have to get the last word (Harvard Public Health magazine)

New journal, podcast take a closer look at artificial intelligence in medicine (Harvard Chan School news)