What do radiographers need to know about explainable artificial intelligence in medical imaging?
Author Block: M. Champendal1, H. Müller2, J. O. Prior1, C. S. D. Reis1; 1Lausanne/CH, 2Sierre/CH
Purpose: Artificial Intelligence (AI) is often seen as a "black box", and health professionals tend not to trust it fully. This study aimed to present to radiographers the main eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI), to help them understand these methods.
Methods or Background: A review was conducted following JBI methodology, searching PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv and Google Scholar for studies published in French or English after 2017 addressing explainability and MI modalities. Two reviewers independently screened titles, abstracts and full texts, resolving disagreements by consensus.
Results or Findings: 1258 results were identified; 228 studies met all inclusion criteria. The terminology used across the articles varied, with explainable (n=207), interpretable (n=187), understandable (n=112), transparent (n=61), reliable (n=31) and intelligible (n=3) used interchangeably.
XAI tools applied to MI are mainly intended for MRI, CT and X-ray imaging, to explain lung (COVID-19) (n=82) and brain (Alzheimer's disease; tumours) (n=74) pathologies.
The main formats used to explain the AI tools were visual (n=186), numerical (n=67), rule-based (n=11) and textual (n=11). The main tasks explained were classification (n=90), prediction (n=47), diagnosis (n=39), detection (n=29), segmentation (n=13) and image-quality improvement (n=6).
The explanations provided were most frequently local (78.1%); 5.7% were global, and 16.2% combined local and global approaches.
Conclusion: The number of XAI publications in MI is increasing, mainly supporting the classification and prediction of lung and brain pathologies. Visual and numerical output formats predominate. Terminology standardisation remains a challenge, as terms such as "explainable" and "interpretable" are used interchangeably. Future work is required to involve all stakeholders in the development of XAI.
Limitations: The review focused solely on recent XAI developments and therefore excluded studies published before 2018; tools explored during that earlier period may have been missed.
Funding for this study: No funding was received for this study.
Has your study been approved by an ethics committee? Not applicable
Ethics committee - additional information: No information provided by the submitter.