ChatGPT can score at or around the approximately 60 percent passing threshold for the United States Medical Licensing Exam (USMLE), producing responses that were internally coherent and contained frequent insights, according to a new study.
Tiffany Kung and colleagues at AnsibleHealth, California, US, tested ChatGPT’s performance on the USMLE, a highly standardised and regulated series of three exams (Step 1, Step 2CK, and Step 3) required for medical licensure in the US, the study said.
Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, from biochemistry to diagnostic reasoning to bioethics.
After screening out image-based questions, the authors tested the software on 350 of the 376 publicly available questions from the June 2022 USMLE release, the study said.
The authors found that, after indeterminate responses were removed, ChatGPT scored between 52.4 percent and 75 percent across the three exams, according to the study published in the journal PLOS Digital Health.
The passing threshold each year is approximately 60 percent.
ChatGPT is a new artificial intelligence (AI) system, known as a large language model (LLM), designed to generate human-like writing by predicting upcoming word sequences.
Unlike most chatbots, ChatGPT cannot search the internet, the study said.
Instead, it generates text using word relationships predicted by its internal processes, the study said.
According to the study, ChatGPT also demonstrated 94.6 percent concordance across all its responses and produced at least one significant insight (something new, non-obvious, and clinically valid) in 88.9 percent of its responses.
ChatGPT also exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8 percent on an older dataset of USMLE-style questions, the study said.
While the relatively small input size restricted the depth and range of their analyses, the authors noted that their findings provide a glimpse into ChatGPT’s potential to enhance medical education and, eventually, clinical practice.
For example, they added, clinicians at AnsibleHealth already use ChatGPT to rewrite jargon-heavy reports for easier patient comprehension.
“Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation,” said the authors.
Kung added that ChatGPT’s role in this research went beyond being the study subject.
“ChatGPT contributed substantially to the writing of [our] manuscript… We interacted with ChatGPT much like a colleague, asking it to synthesize, simplify, and offer counterpoints to drafts in progress… All of the co-authors valued ChatGPT’s input.”