Understanding the decision-making process of deep learning networks is a key challenge that has rarely been investigated for synthetic aperture radar (SAR) images. In this article, a set of new analytical tools is proposed and applied to a convolutional neural network (CNN) performing automatic target recognition on two SAR datasets containing military targets. First, the respective influence of the target, shadow, and background areas on classification performance is analyzed. Among these three regions, the shadow turns out to contribute the least to the decision process, behind the target and the clutter. Second, the location of the most influential features is determined with classification maps, obtained by systematically hiding specific parts of the target and recording the resulting classification rate for the images under test. The image areas without which classification fails are specific to the target type and orientation. Nonetheless, a strong contribution of particular target parts, such as the top of the target and the areas facing the radar, is observed. Finally, results show that features are increasingly activated along the CNN depth according to the target type and its orientation, even though target orientation is absent from the loss function.
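The classification maps mentioned in the abstract are produced by occluding target parts and recording how classification behaves. A minimal sketch of such an occlusion sweep is given below, assuming a PyTorch CNN classifier; the model, patch size, stride, and fill value are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np
import torch


def occlusion_classification_map(model, image, true_label, patch=8, stride=4, fill=0.0):
    """Slide an occluding patch over a single SAR chip (C, H, W) and record the
    confidence assigned to the true class; low values mark image areas the
    decision relies on. Hypothetical helper, not the paper's implementation."""
    model.eval()
    _, h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    with torch.no_grad():
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                # Hide one part of the target (or shadow/clutter) with a constant fill
                occluded[:, y:y + patch, x:x + patch] = fill
                probs = torch.softmax(model(occluded.unsqueeze(0)), dim=1)
                heat[i, j] = probs[0, true_label].item()
    return heat
```

Aggregating such maps over many chips of the same target type and aspect angle would yield the type- and orientation-specific influence maps the abstract refers to.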



    Title:

    Explainability of Deep SAR ATR Through Feature Analysis


    Contributors:


    Publication date:

    2021-02-01


    Size:

    4349159 bytes


    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English



    Explainability of Deep Reinforcement Learning Method with Drones

    Cetin, Ender / Barrado, Cristina / Pastor, Enric | IEEE | 2023


    Driving Style Classification Using Deep Temporal Clustering with Enhanced Explainability

    Feng, Yuxiang / Ye, Qiming / Adan, Fahmy et al. | IEEE | 2023


    Building Trust to AI Systems Through Explainability. Technical and legal perspectives

    Nalepa, Grzegorz J. / Araszkiewicz, Michal / Nowaczyk, Slawomir et al. | TIBKAT | 2019


    Explainability of autonomous vehicle decision making

    WRAY KYLE HOLLINS / BENTAHAR OMAR / VAGADIA ASTHA et al. | European Patent Office | 2023


    Explainability of Autonomous Vehicle Decision Making

    WRAY KYLE HOLLINS / BENTAHAR OMAR / VAGADIA ASTHA et al. | European Patent Office | 2021
