Using Explainable Artificial Intelligence to Locate Pneumonia
DOI: https://doi.org/10.58445/rars.1724

Keywords: computer science, AI, pneumonia

Abstract
Artificial intelligence (AI) has already become a vital resource in numerous industries; however, it is often challenging to understand how AI reaches its results. This lack of transparency, combined with potential biases within machine learning models, prevents professionals in critical fields like healthcare from relying on deep learning models for diagnostic purposes, hindering the widespread use of AI in healthcare. This paper investigates the application of explainable AI (XAI) in the medical field, focusing on the detection of pneumonia through the analysis of lung X-ray scans. In this study, we developed an XAI tool built around a Convolutional Neural Network (CNN) constructed with PyTorch and trained on the PneumoniaMNIST dataset. Our model achieves an accuracy of nearly 91% on 28x28 pixel images and highlights the pixels the deep learning model considers most important in its decision-making process. The primary aim of this project is to present a proof-of-concept tool for the integration of XAI into healthcare diagnostics, with the goal of assisting medical professionals in making informed decisions and ultimately saving lives. By demonstrating the feasibility and effectiveness of XAI in pneumonia detection, we lay the groundwork for future advancements in healthcare AI, emphasizing the importance of transparency and reliability in AI models.
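To make the approach concrete, the sketch below shows one way such a pipeline could look in PyTorch: a small CNN for 28x28 grayscale scans and a vanilla-gradient saliency map that highlights the pixels most influential on the prediction. This is a minimal illustration under stated assumptions, not the authors' exact architecture or explanation method; the layer sizes and the `SmallCNN` / `saliency_map` names are hypothetical.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Hypothetical small CNN for 28x28 grayscale X-rays, two classes
    (normal / pneumonia). Not the paper's exact architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def saliency_map(model, image):
    """Vanilla-gradient saliency: the absolute gradient of the predicted
    class score with respect to each input pixel, one simple way to
    'highlight the most important pixels'."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0].max()      # score of the predicted class
    score.backward()                   # populate image.grad
    return image.grad.abs().squeeze()  # (28, 28) per-pixel importance

model = SmallCNN()
x = torch.randn(1, 1, 28, 28)          # stand-in for a PneumoniaMNIST scan
sal = saliency_map(model, x)
top10 = sal.flatten().topk(10).indices  # the 10 most influential pixels
```

Overlaying the saliency map on the original scan is what would let a clinician see *where* the model is looking, rather than just its label and confidence.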
License
Copyright (c) 2024 Anthony Novokshanov
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.