Constellation, the institutional repository of the Université du Québec à Chicoutimi

Designing an interpretability-based model to explain the artificial intelligence algorithms in healthcare

Ennab, Mohammad and Mcheick, Hamid. (2022). Designing an interpretability-based model to explain the artificial intelligence algorithms in healthcare. Diagnostics, 12(7), e1557.

PDF - Published version
Available under a Creative Commons (CC-BY 2.5) licence.

1 MB

Official URL: https://dx.doi.org/doi:10.3390/diagnostics12071557

Abstract

The lack of interpretability in artificial intelligence models (i.e., deep learning, machine learning, and rule-based models) is an obstacle to their widespread adoption in the healthcare domain. The absence of understandability and transparency frequently leads to (i) inadequate accountability and (ii) a consequent reduction in the quality of the models' predictive results. On the other hand, interpretability in the predictions of AI models facilitates clinicians' understanding of, and trust in, these complex models. Data protection regulations worldwide emphasize the relevance of the plausibility and verifiability of AI models' predictions. In response to this challenge, we designed an interpretability-based model whose algorithms achieve human-like reasoning abilities through statistical analysis of the datasets, calculating the relative weights of the feature variables drawn from medical images and patient symptoms. The relative weights represent the importance of the variables in predictive decision-making. In addition, the relative weights are used to compute the positive and negative probabilities of having the disease, which yields high-fidelity explanations. Hence, the primary goal of our model is to shed light on the prediction process and to explain how the model's predictions are produced. Consequently, our model also contributes by demonstrating accuracy. Furthermore, two experiments on COVID-19 datasets demonstrated the effectiveness and interpretability of the new model.
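The abstract's core idea can be illustrated with a minimal sketch: normalize per-feature importance scores into relative weights, then combine the weights of the findings observed in a patient into positive and negative probabilities. The function names, the weighted-vote combination rule, and all numeric scores below are illustrative assumptions; the paper's actual statistical procedure is not reproduced here.

```python
# Hypothetical sketch of the relative-weight idea described in the abstract;
# the scoring rule and all numbers are invented for illustration.

def relative_weights(importances):
    """Normalize raw per-feature importance scores so they sum to 1."""
    total = sum(importances.values())
    return {name: score / total for name, score in importances.items()}

def disease_probabilities(weights, findings):
    """Weighted vote: each observed finding contributes its relative weight.

    weights  -- feature name -> relative weight (sums to 1)
    findings -- feature name -> 1 if observed in the patient, else 0
    """
    positive = sum(weights[f] * findings.get(f, 0) for f in weights)
    negative = 1.0 - positive
    return positive, negative

# Illustrative feature scores (assumed values, not taken from the paper)
scores = {"ground_glass_opacity": 4.0, "fever": 2.0, "cough": 2.0, "fatigue": 2.0}
w = relative_weights(scores)
pos, neg = disease_probabilities(w, {"ground_glass_opacity": 1, "fever": 1})
print(round(pos, 2), round(neg, 2))  # → 0.6 0.4
```

Because the weights sum to 1, the positive and negative probabilities always sum to 1 as well, which is what makes each feature's contribution to the final decision directly readable from its relative weight.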

Document type: Article published in a peer-reviewed journal
ISSN: 2075-4418
Volume: 12
Issue: 7
Pages: e1557
Peer-reviewed: Yes
Date: 2022
Unique identifier: 10.3390/diagnostics12071557
Subjects: Natural sciences and engineering > Mathematical sciences > Computer science
Health sciences > Medical sciences
Department, module, service and research unit: Departments and modules > Département d'informatique et de mathématique
Keywords: interpretability, artificial intelligence, relative weights, probability
Deposited on: 29 Sep 2022 20:37
Last modified: 29 Sep 2022 20:37


