Preprint / Version 1

Making Sense of Explainable AI in Healthcare and Exploring Its Current Impact and Future Possibilities

Authors

  • Surya Geethan Devisree Arun Vasanthageethan, Lake Norman High School

DOI:

https://doi.org/10.58445/rars.2143

Keywords:

Explainable Artificial Intelligence (XAI), black box problem, medical imaging

Abstract

Explainable Artificial Intelligence (XAI) is making waves in healthcare by addressing long-standing concerns about the transparency, trust, and accountability of AI-driven decision-making. As AI increasingly drives diagnostics, clinical workflows, and patient outcomes, the "black box" problem of many AI models raises serious ethical and practical concerns. XAI sheds light on how these systems reach their decisions, helping to build trust among both health professionals and patients.

This paper examines the current landscape of XAI in healthcare, illustrating its applications in medical imaging, predictive analytics, and patient engagement. XAI has been shown to improve clinical decision-making, increase transparency, and equip patients with valuable insights. Challenges persist, however: the complexity of AI models, the scarcity of high-quality data, and the lack of standardized evaluation metrics all stand in the way of wide adoption. The paper discusses possible ways to overcome these challenges, including developing novel XAI methods, integrating XAI with other AI technologies, and establishing rigorous evaluation frameworks. Advancing explainability may well be key to a healthcare revolution, ensuring ethical AI integration and strengthening trust and reliability in patient-centered care.
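To make the idea of post-hoc explanation concrete, the short Python sketch below applies one widely used model-agnostic XAI technique, permutation feature importance, to a public diagnostic dataset. The dataset, model, and parameter choices are illustrative assumptions for this page, not the specific methods evaluated in the paper.

    # Illustrative only: permutation feature importance, a model-agnostic
    # way to ask which inputs a "black box" clinical classifier relies on.
    # Dataset and model are assumptions chosen for demonstration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Public diagnostic dataset: tumor measurements -> malignant/benign.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y
    )

    # Fit an opaque model of the kind the abstract calls a "black box".
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Explain: shuffle each feature on held-out data and measure the drop
    # in accuracy; a large drop means the model depends on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=20, random_state=0)

    # Report the five features the model leans on most.
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

Because the technique only needs the model's predictions, the same three steps (fit, permute, rank) apply unchanged to any classifier, which is what "model-agnostic" means here.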

Posted

2025-01-08