Explainable AI in medical imaging: An overview for clinical practitioners – Beyond saliency-based XAI approaches

Abstract

Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the implementation of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, where AI aids several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI reshapes medical research and contributes to the development of personalized clinical care. Consequently, alongside this extended implementation arises the need for an extensive understanding of AI systems, including their inner workings, potentials, and limitations, a need that the field of eXplainable AI (XAI) aims to address. Because medical imaging is mainly associated with visual tasks, most explainability approaches incorporate saliency-based XAI methods. In contrast, this article investigates the full potential of XAI in medical imaging by focusing specifically on XAI techniques that do not rely on saliency, and by providing diversified examples. We address a broad audience, particularly healthcare professionals. Moreover, this work aims to establish a common ground for cross-disciplinary understanding and exchange between Deep Learning (DL) developers and healthcare professionals, which is why we opted for a non-technical overview. The presented XAI methods are grouped by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.

Publication
European Journal of Radiology
Katarzyna Borys
Data Science

My research interests include Deep Learning, Computer Vision, Radiomics, and Explainable AI.

Felix Nensa
Lead

My research interests include medical digitalization, computer vision, and radiology.