In a new commentary published in the Journal of the American Medical Association (JAMA), University of Maryland School of Medicine (UMSOM) faculty warn against using AI-driven software tools built on large language models to summarize patient medical data without proper review and oversight from the US Food and Drug Administration (FDA). Without such regulation, the authors say, these new tools could lead to biased decision-making and misdiagnosis, causing severe harm to patients.

Katherine Goodman, PhD, JD, Assistant Professor of Epidemiology and Public Health at UMSOM, Core Investigator at the University of Maryland Institute for Health Computing (UM-IHC), and lead author of the commentary, and her colleagues point out that there are currently “no comprehensive standards for large language model-generated clinical summaries beyond the general recognition that summaries should be consistently accurate and concise.” They add that the FDA’s “final guidance for clinical decision support software - published two months before ChatGPT’s release - provides an unintentional ‘roadmap’ for how large language models can avoid FDA medical device regulation.”

Dr. Goodman and Daniel Morgan, MD, MS, Professor of Epidemiology and Public Health and senior author of the commentary, are available for interviews to discuss how AI software tools could introduce narrative errors and bias into a patient’s electronic health record, as well as their recommendations for improving these tools.

To request an interview, please contact UMSOM media relations.

The full commentary can be found here.


MEDIA CONTACTS:

Holly Moody-Porter

Senior Media & Public Relations Specialist

University of Maryland School of Medicine


Deborah Kotz

Senior Director of Media Relations

University of Maryland School of Medicine