The explainability of Deep Neural Networks (DNNs) has recently gained significant importance, especially in safety-critical applications such as automated/autonomous vehicles, a.k.a. automated driving systems. CounterFactual (CF) explanations have emerged as a promising approach for interpreting the behaviour of black-box DNNs. A CF explainer identifies the minimum modifications to the input that would alter the model's output to its complement; in other words, it computes the minimum modifications required to cross the model's decision boundary. Current deep generative CF models often work with user-selected features rather than with the discriminative features of the black-box model. Consequently, such CF examples may not lie near the decision boundary, thereby contradicting the definition of a CF. To address this issue, in this paper we propose a novel approach that leverages saliency maps to generate more informative CF explanations. Our approach guides a Generative Adversarial Network with the most influential features of the black-box model's input to produce CFs near the decision boundary. We evaluate performance on BDD100k, a real-world dataset of driving scenes, and demonstrate that our approach outperforms several baseline methods on well-known CF metrics, including proximity, sparsity and validity. Our work contributes to the ongoing efforts to improve the interpretability of DNNs and provides a promising direction for generating more accurate and informative CF explanations. The source code is available at: https://github.com/Amir-Samadi//Saliency_Aware_CF.
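The abstract's definition of a CF admits a standard formal statement. The following is a generic formulation in our own notation, not notation taken from the paper itself: the counterfactual is the closest input (under some distance d) whose predicted label differs from that of the original input x.

    % Generic counterfactual objective (our notation, not the paper's).
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    \[
      x_{\mathrm{cf}} \;=\; \operatorname*{arg\,min}_{x'} \, d(x, x')
      \quad \text{subject to} \quad f(x') \neq f(x)
    \]
    \end{document}

Here f denotes the black-box DNN and d is typically a sparsity-encouraging norm such as L1. As for the saliency-guided generation described above, the paper's pipeline is not reproduced here; the sketch below only illustrates the general idea under our own assumptions (a gradient-based saliency map gating where a generator may edit the input; all function and parameter names, including tau, are hypothetical).

    import torch

    def gradient_saliency(model, x, target_class):
        # x: image tensor of shape [1, C, H, W]; model: differentiable classifier.
        x = x.clone().requires_grad_(True)
        score = model(x)[0, target_class]   # logit of the class to explain
        score.backward()
        # Collapse colour channels into a single [1, 1, H, W] importance map.
        return x.grad.detach().abs().sum(dim=1, keepdim=True)

    def saliency_masked_cf(x, generator, saliency, tau=0.8):
        # Keep generated edits only where saliency is above its tau-quantile,
        # so the counterfactual modifies the model's most influential pixels.
        mask = (saliency >= torch.quantile(saliency, tau)).float()
        x_edit = generator(x)                    # candidate counterfactual image
        return mask * x_edit + (1.0 - mask) * x  # blend: edit salient regions only

Restricting edits to high-saliency regions is one way to keep changes sparse and aligned with the features the classifier actually uses, which is the property the abstract argues pushes CFs towards the decision boundary.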



    Title:

    SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems


    Contributors:
    Samadi, Amir (Author) / Shirian, Amir (Author) / Koufos, Konstantinos (Author) / Debattista, Kurt (Author) / Dianati, Mehrdad (Author)


    Publication date:

    2023-09-24


    Format / Extent:

    2945625 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Sparse Visual Counterfactual Explanations in Image Space

    Boreiko, Valentyn / Augustin, Maximilian / Croce, Francesco et al. | British Library Conference Proceedings | 2022


    Textual Explanations for Automated Commentary Driving

    Kuhn, Marc Alexander / Omeiza, Daniel / Kunze, Lars | IEEE | 2023



    Safe Halt as Fail-safe Concept for Automated Driving Systems

    Ackermann, Stefan Martin | DataCite | 2023

    Free access

    CONTEXT-AWARE NAVIGATION PROTOCOL FOR SAFE DRIVING

    JEONG JAE HOON / MUGABARIGIRA BIEN AIME / SHEN YIWEN | European Patent Office | 2021

    Free access