The explainability of Deep Neural Networks (DNNs) has recently gained significant importance, especially in safety-critical applications such as automated/autonomous vehicles (a.k.a. automated driving systems). CounterFactual (CF) explanations have emerged as a promising approach for interpreting the behaviour of black-box DNNs. A CF explainer identifies the minimum modifications to the input that would alter the model's output to its complement. In other words, it computes the minimum modifications required to cross the model's decision boundary. Current deep generative CF models often work with user-selected features rather than focusing on the discriminative features of the black-box model. Consequently, such CF examples may not necessarily lie near the decision boundary, thereby contradicting the definition of CFs. To address this issue, we propose in this paper a novel approach that leverages saliency maps to generate more informative CF explanations. Our approach guides a Generative Adversarial Network, using the most influential input features identified for the black-box model, to produce CFs near the decision boundary. We evaluate its performance on BDD100k, a real-world dataset of driving scenes, and demonstrate its superiority over several baseline methods in terms of well-known CF metrics, including proximity, sparsity, and validity. Our work contributes to the ongoing efforts to improve the interpretability of DNNs and provides a promising direction for generating more accurate and informative CF explanations. The source code is available at: https://github.com/Amir-Samadi//Saliency_Aware_CF.
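A minimal sketch of the idea described in the abstract, assuming a PyTorch black-box classifier and a generator network; this is not the authors' released implementation, and the names (saliency_map, cf_loss, lambda_prox, lambda_sparse) are illustrative assumptions only:

```python
# Hedged sketch: one way a gradient-based saliency map could gate a generator's
# edits so counterfactuals only modify features the black box relies on.
import torch
import torch.nn as nn
import torch.nn.functional as F

def saliency_map(black_box: nn.Module, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Gradient-based saliency of the black-box loss w.r.t. the input image."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(black_box(x), target)
    grad, = torch.autograd.grad(loss, x)
    sal = grad.abs().amax(dim=1, keepdim=True)                 # max over channels
    sal = sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-8)    # normalise to [0, 1]
    return sal                                                 # shape (B, 1, H, W)

def cf_loss(black_box, generator, x, y_orig, y_cf,
            lambda_prox=1.0, lambda_sparse=0.1):
    """Counterfactual objective: flip the prediction while the saliency mask
    restricts edits to the most influential input regions (weights are assumed)."""
    sal = saliency_map(black_box, x, y_orig).detach()
    delta = generator(x)                                       # proposed modification
    x_cf = torch.clamp(x + sal * delta, 0.0, 1.0)              # saliency-gated edit
    validity = F.cross_entropy(black_box(x_cf), y_cf)          # cross the decision boundary
    proximity = F.l1_loss(x_cf, x)                             # stay close to the input
    sparsity = delta.abs().mean()                              # change few features
    return validity + lambda_prox * proximity + lambda_sparse * sparsity
```

In a full GAN setup this objective would be combined with the usual adversarial loss so that the generated CFs also remain realistic; the proximity, sparsity, and validity terms mirror the evaluation metrics mentioned above.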
SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems
2023-09-24
2945625 bytes
Conference paper
Electronic Resource
English
Counterfactual Explanations for Data-Driven Decisions
TIBKAT | 2020
Textual Explanations for Automated Commentary Driving
IEEE | 2023