Deep learning-based auto-driving systems are vulnerable to adversarial example attacks, which may result in wrong decisions and accidents. An adversarial example can fool a well-trained neural network by adding barely imperceptible perturbations to clean data. In this paper, we explore the mechanism of adversarial examples and adversarial robustness from the perspective of statistical mechanics, and propose a statistical mechanics-based interpretation model of adversarial robustness. The state transition caused by adversarial training is formally constructed on the basis of the theory of fluctuation-dissipation disequilibrium in statistical mechanics. In addition, we systematically study the effects of adversarial example attacks and of the training process on system robustness, including the influence of different training procedures on network robustness. Our work helps to understand and explain the adversarial example problem and to improve the robustness of deep learning-based auto-driving systems.
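The paper itself ships no code; as a minimal sketch of the kind of attack the abstract describes, the snippet below crafts an adversarial example with the fast gradient sign method (FGSM) in PyTorch. The names `model`, `x`, `y`, and `epsilon` are hypothetical placeholders, and FGSM merely stands in for whichever attack the authors actually evaluate.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example from a clean input x (FGSM sketch).

    The perturbation is the epsilon-scaled sign of the loss gradient:
    a barely perceptible shift chosen to maximally increase the loss.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # assumes a classifier returning logits
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, the process whose state transition the paper formalizes, then amounts to replacing (or mixing in) clean batches with `fgsm_attack(model, x, y)` outputs inside an otherwise ordinary training loop.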


    Title:

    Interpreting Adversarial Examples and Robustness for Deep Learning-Based Auto-Driving Systems


    Contributors:


    Publication date:

    2022-07-01


    Size:

    1788880 bytes




    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English



    Similar titles:

    Certified Adversarial Robustness for Deep Reinforcement Learning

    Luetjens, Bjoern Malte / Everett, Michael F. / How, Jonathan P. et al. | European Patent Office | 2021


    On Adversarial Robustness of Semantic Segmentation Models for Automated Driving

    Yin, Huilin / Wang, Ruining / Liu, Boyu et al. | IEEE | 2022


    Probabilistic Robustness Analysis: Examples

    Wojtkiewicz, Steven | AIAA | 2002


    Robustness of reinforcement learning based autonomous driving technologies

    Hart, Fabian / Technische Universität Dresden | SLUB | 2024