Book section, 2021

Explainable AI for Medical Imaging: Knowledge Matters

Abstract

Cooperation between medical experts and virtual assistants depends on trust. Over recent years, machine learning algorithms have been able to construct models of high accuracy and predictive power. Yet, in contrast to their earlier, hypothesis-driven counterparts, current data-driven models are increasingly criticized for their opaque decision-making process. Safety-critical applications such as self-driving cars or health status estimation cannot rely on benchmark-winning black-box models. They need prediction models whose rationale and logic can be explained in an understandable, human-readable format, not just out of curiosity but also to highlight and deter potential biases. In this chapter we discuss how Explainable Artificial Intelligence (XAI) addresses such issues in medical imaging. We also focus on machine learning approaches developed for breast cancer diagnosis, and discuss the advent of deep learning in this particular domain. Indeed, despite the promising results achieved over the last few years, an in-depth analysis of the state of the art identifies several important challenges faced by deep learning approaches. We present the emerging trends and proposals to overcome these challenges.

Dates and versions

hal-03612280 , version 1 (17-03-2022)


Cite

Pascal Bourdon, Olfa Ben Ahmed, Thierry Urruty, Khalifa Djemal, Christine Fernandez-Maloigne. Explainable AI for Medical Imaging: Knowledge Matters. Multi-faceted Deep Learning, Springer International Publishing, pp. 267-292, 2021. ⟨10.1007/978-3-030-74478-6_11⟩. ⟨hal-03612280⟩