Study Finds ChatGPT Provides Inaccurate Responses to Drug Questions
A recent study presented at the American Society of Health-System Pharmacists Midyear Clinical Meeting has raised concerns about the accuracy and reliability of ChatGPT, an artificial intelligence program, in providing drug-related information. According to the study led by Sara Grossman, PharmD, Associate Professor of Pharmacy Practice at Long Island University, nearly 75% of ChatGPT’s responses to drug-related questions were deemed incomplete or inaccurate, with some potentially endangering patients.
The AI program also generated fake citations when asked to provide references. Grossman urged caution among healthcare professionals and patients, advising them to verify medication-related information against trusted sources rather than rely solely on ChatGPT.
The study, conducted over a 16-month period, challenged the AI system with real questions posed to Long Island University’s College of Pharmacy drug information service. Only 10 of 39 responses were considered satisfactory, underscoring the need for thorough evaluation and vigilance when using AI tools for medication-related information. Gina Luchen, PharmD, ASHP director of digital health and data, highlighted the potential impact of AI tools in healthcare and emphasized pharmacists’ responsibility to safeguard patients by evaluating whether such tools are appropriate and educating patients on trusted sources of information.
Tina Zerilli, PharmD, Associate Professor of Pharmacy Practice at Long Island University, is set to present the full evaluation of ChatGPT’s performance at the meeting. The study serves as a critical reminder that integrating AI into healthcare practice requires careful consideration.