Evaluating the Ability of Chatbots to Answer Entrance Exam Questions for Postgraduate Studies in Medical Laboratory Sciences in Iran

Authors

  • Farhad AREFINIA, Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
  • Azamossadat HOSSEINI, Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
  • Farkhondeh ASADI, Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
  • Versa OMRANI-NAVA, Department of Paramedical Sciences, Amol School of Paramedical Sciences, Mazandaran University of Medical Sciences, Sari, Iran
  • Raham NILOOFARI, Student Research Committee, School of Paramedical Sciences, Mazandaran University of Medical Sciences, Amol, Iran

Keywords:

Chatbots, Medical Laboratory Sciences, Multiple-Choice Question Exams, Education, Comparative Analysis

Abstract

As educational technology advances, the integration of Artificial Intelligence (AI)-driven chatbots into academic contexts becomes increasingly relevant. This study explored the performance of three advanced chatbots (ChatGPT 3.5, Claude, and Google Bard) in responding to entrance exam questions for Master's and PhD programs in Medical Laboratory Sciences in Iran in 2023. Multiple-choice questions from the 2023 Master's and PhD entrance exams in Medical Laboratory Sciences were presented to ChatGPT 3.5, Claude, and Google Bard, and their responses were evaluated. The three chatbots achieved overall accuracies of 38%, 42%, and 37%, respectively, indicating a comparable baseline proficiency across a variety of questions. Subject-specific analysis highlighted their strengths and weaknesses in different scientific domains. Our findings indicate that, while the evaluated chatbots demonstrated some ability to answer medical laboratory science questions, their performance remains insufficient for success in postgraduate entrance exams.
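For readers interested in how such an evaluation can be scored, the sketch below shows one way to compute overall and subject-specific accuracy for each chatbot from a set of graded multiple-choice responses. The record structure and field names (`subject`, `correct_answer`, and the per-chatbot answer columns) are illustrative assumptions, not the authors' actual data format.

```python
from collections import defaultdict

# Hypothetical graded responses: each record holds the exam subject, the answer
# key, and the option each chatbot selected (field names are illustrative only).
responses = [
    {"subject": "Biochemistry", "correct_answer": "B",
     "ChatGPT 3.5": "B", "Claude": "C", "Google Bard": "B"},
    {"subject": "Immunology", "correct_answer": "D",
     "ChatGPT 3.5": "A", "Claude": "D", "Google Bard": "D"},
]

def accuracy_by_subject(records, chatbot):
    """Return overall and per-subject accuracy (%) for one chatbot."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["subject"]] += 1
        if rec[chatbot] == rec["correct_answer"]:
            hits[rec["subject"]] += 1
    per_subject = {s: 100 * hits[s] / totals[s] for s in totals}
    overall = 100 * sum(hits.values()) / len(records)
    return overall, per_subject

for bot in ("ChatGPT 3.5", "Claude", "Google Bard"):
    overall, per_subject = accuracy_by_subject(responses, bot)
    print(f"{bot}: overall {overall:.0f}%, by subject {per_subject}")
```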

Published

29.09.2024

How to Cite

AREFINIA F, HOSSEINI A, ASADI F, OMRANI-NAVA V, NILOOFARI R. Evaluating the Ability of Chatbots to Answer Entrance Exam Questions for Postgraduate Studies in Medical Laboratory Sciences in Iran. Appl Med Inform [Internet]. 2024 Sep. 29 [cited 2024 Dec. 21];46(3). Available from: https://ami.info.umfcluj.ro/index.php/AMI/article/view/1055

Issue

Vol. 46 No. 3 (2024)
Section

Research letters/Short reports