Explainable AI for Medical Imaging: Knowledge Matters

Abstract : Cooperation between medical experts and virtual assistants depends on trust. Over recent years, machine learning algorithms have been able to construct models of high accuracy and predictive power. Yet in contrast to their earlier, hypothesis-driven counterparts, current data-driven models are increasingly criticized for their opaque decision-making processes. Safety-critical applications such as self-driving cars or health status estimation cannot rely on benchmark-winning black-box models. They need prediction models whose rationale and logic can be explained in an understandable, human-readable format, not merely out of curiosity but also to highlight and deter potential biases. In this chapter we discuss how Explainable Artificial Intelligence (XAI) addresses such issues in medical imaging. We also focus on machine learning approaches developed for breast cancer diagnosis and discuss the advent of deep learning in this particular domain. Indeed, despite the promising results achieved over the last few years, a thorough state-of-the-art analysis identifies several important challenges faced by deep learning approaches. We present the emerging trends and proposals to overcome these challenges.
Document type :
Book sections
https://hal.archives-ouvertes.fr/hal-03612280
Contributor : Pascal Bourdon
Submitted on : Thursday, March 17, 2022 - 3:59:27 PM
Last modification on : Sunday, July 24, 2022 - 3:43:21 AM

Citation

Pascal Bourdon, Olfa Ben Ahmed, Thierry Urruty, Khalifa Djemal, Christine Fernandez-Maloigne. Explainable AI for Medical Imaging: Knowledge Matters. Multi-faceted Deep Learning, Springer International Publishing, pp.267--292, 2021, ⟨10.1007/978-3-030-74478-6_11⟩. ⟨hal-03612280⟩
