Evaluating the Ability of Chatbots to Answer Entrance Exam Questions for Postgraduate Studies in Medical Laboratory Sciences in Iran
Keywords:
Chatbots, Medical Laboratory Sciences, Multiple-Choice Question Exams, Education, Comparative Analysis

Abstract
As educational technology advances, the integration of Artificial Intelligence (AI)-driven chatbots in academic contexts becomes increasingly relevant. This study evaluated the performance of three advanced chatbots (ChatGPT 3.5, Claude, and Google Bard) on entrance exam questions for Master's and PhD programs in Medical Laboratory Sciences in Iran. Multiple-choice questions from the 2023 entrance exams for these programs were presented to each chatbot, and the responses were evaluated. The three chatbots achieved overall accuracies of 38% (ChatGPT 3.5), 42% (Claude), and 37% (Google Bard), indicating a comparable baseline proficiency across a variety of questions. Subject-specific analysis highlighted their strengths and weaknesses in different scientific domains. While the evaluated chatbots showed some ability to answer medical laboratory science questions, their performance remains insufficient for success in these postgraduate entrance exams.
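For illustration only, the minimal Python sketch below shows how per-chatbot accuracy, overall and by subject, can be computed from a set of scored multiple-choice items; the example items, subjects, and answer letters are hypothetical placeholders, not data from this study. Overall accuracy is the same computation with all subjects pooled.

```python
from collections import defaultdict

# Illustrative scoring sketch: the items, subjects, and answer letters below
# are hypothetical placeholders, not data from the study.
exam_items = [
    {"subject": "Biochemistry", "correct": "C",
     "answers": {"ChatGPT 3.5": "C", "Claude": "B", "Google Bard": "C"}},
    {"subject": "Hematology", "correct": "A",
     "answers": {"ChatGPT 3.5": "D", "Claude": "A", "Google Bard": "A"}},
    # ... one entry per multiple-choice question
]

def accuracy_by_subject(items):
    """Return {chatbot: {subject: fraction of items answered correctly}}."""
    correct = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for item in items:
        totals[item["subject"]] += 1
        for bot, answer in item["answers"].items():
            if answer == item["correct"]:
                correct[bot][item["subject"]] += 1
    return {bot: {subject: correct[bot][subject] / totals[subject]
                  for subject in totals}
            for bot in correct}

print(accuracy_by_subject(exam_items))
```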
License
Copyright (c) 2024 Farhad AREFINIA, Azamossadat HOSSEINI, Farkhondeh ASADI, Versa OMRANI-NAVA, Raham NILOOFARI
This work is licensed under a Creative Commons Attribution 4.0 International License.