Opening the Black-Box: Extracting Medical Reasoning from Machine Learning Predictions

Authors

  • Marius FERSIGAN, Iuliu Haţieganu University of Medicine and Pharmacy
  • Marius MĂRUȘTERI, University of Medicine, Pharmacy, Science and Technology of Târgu Mureş

Keywords

Explainable AI, Machine Learning, Interpretability, Explainability, Transparency

Abstract

Background: A transparent machine learning model enables the end-user to trace the decision process from (clinical) input data to the model's final prediction. Modern (deep learning) machine learning models are highly complex and opaque, lacking the transparency required by both the clinician and the patient. Not knowing why a model predicted a particular outcome in a clinical context is likely to erode the clinician's trust in the model and ultimately affect the patient's health outcome. Aim: To integrate model training into a single automated pipeline that outputs the prediction along with the relevant explanation. Materials and Methods: The workflow was built in the Julia programming language, employing a wide range of machine learning packages from the MLJ ecosystem. Interoperability with the explainer packages from the Python ecosystem was enabled through the PyCall.jl package. Results: An automated machine learning pipeline that a) exposes the (clinical) decision process inside various black-box models (convolutional neural networks, random forests, boosted models), b) enables the training of explainable models without compromising the performance metrics, and c) validates the resulting method against medical domain knowledge. Our pipeline combines well-known, non-healthcare-specific explainers (LIME, SHAP, ShapML, and InterpretML) into a global explainer tailored to healthcare-specific data. Conclusions: The performance metrics of machine learning models trained on healthcare data need not be sacrificed on the altar of transparency and interpretability. Using model-agnostic and model-specific explainers, we can satisfy both the clinician's need for a transparent decision and diagnosis process and the need for the superior performance metrics that ensure the model's predictive power.
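
The Materials and Methods describe training MLJ models in Julia and reaching Python explainer packages through PyCall.jl. The lines below are a minimal sketch of that interoperability pattern only, not the authors' pipeline: the iris dataset (a stand-in for clinical data), the RandomForestClassifier from the DecisionTree package, the "virginica" class, and SHAP's KernelExplainer are all illustrative assumptions.

    using MLJ, PyCall

    # Load a random-forest classifier from the MLJ model registry
    # (requires MLJDecisionTreeInterface in the active environment).
    Forest = @load RandomForestClassifier pkg=DecisionTree verbosity=0

    # Placeholder tabular dataset standing in for clinical data.
    X, y = @load_iris
    mach = machine(Forest(), X, y)
    fit!(mach, verbosity=0)

    # Bridge to the Python ecosystem: the shap package must be installed
    # in the Python environment linked to PyCall.
    shap = pyimport("shap")

    # Plain-matrix view of the features; PyCall converts Julia matrices
    # to NumPy arrays when they cross into Python.
    Xmat = MLJ.matrix(X)

    # Wrap the model's predicted probability for one (illustrative) class
    # as a matrix -> vector function, the form KernelExplainer expects.
    predict_fn(data) = pdf.(predict(mach, MLJ.table(data)), "virginica")

    explainer   = shap.KernelExplainer(predict_fn, Xmat)
    shap_values = explainer.shap_values(Xmat[1:5, :])  # local explanations for 5 rows

Any model-agnostic explainer reachable from Python (LIME, InterpretML) can be substituted at the same point, since the Julia prediction function is exposed to Python as an ordinary callable.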

Published

05.09.2021

How to Cite

1. FERSIGAN M, MĂRUȘTERI M. Opening the Black-Box: Extracting Medical Reasoning from Machine Learning Predictions. Appl Med Inform [Internet]. 2021 Sep. 5 [cited 2024 Nov. 24];43(Suppl. S1):20. Available from: https://ami.info.umfcluj.ro/index.php/AMI/article/view/843

Issue

Section

Special Issue - RoMedINF