Deep reinforcement learning (DRL) has been successfully used to solve various robotic manipulation tasks. However, most existing works do not address the issue of control stability. This is in sharp contrast to the control theory community, where the well-established norm is to prove stability whenever a control law is synthesized. What makes traditional stability analysis difficult for DRL is the uninterpretable nature of neural network policies and the unknown system dynamics. In this work, unconditional stability is obtained by deriving an interpretable deep policy structure based on the energy-shaping control of Lagrangian systems. Then, stability during physical interaction with an unknown environment is established based on passivity. The result is stability-guaranteeing DRL in a model-free framework that is general enough for contact-rich manipulation tasks. With an experiment on a peg-in-hole task, we demonstrate, to the best of our knowledge, the first DRL with a stability guarantee on a real robotic manipulator.
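As illustrative background (a minimal sketch, not the paper's exact construction): for a Lagrangian system, energy shaping with damping injection typically yields a passive closed loop. The learned potential \(\Phi_\theta\) and the positive-definite damping matrix \(D\) below are assumptions chosen for illustration:

```latex
% Rigid-body (Lagrangian) dynamics with external interaction force \tau_{ext}:
%   M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau + \tau_{ext}
% Illustrative energy-shaping + damping-injection law, with a learned
% potential \Phi_\theta (assumed, e.g., a neural network) and D \succ 0:
\[
  \tau = g(q) - \nabla_q \Phi_\theta(q) - D\,\dot{q}
\]
% Storage-function candidate and its derivative along closed-loop
% trajectories (using the skew-symmetry of \dot{M} - 2C):
\[
  V(q,\dot{q}) = \tfrac{1}{2}\,\dot{q}^{\top} M(q)\,\dot{q} + \Phi_\theta(q),
  \qquad
  \dot{V} = -\dot{q}^{\top} D\,\dot{q} + \dot{q}^{\top}\tau_{ext}
          \le \dot{q}^{\top}\tau_{ext},
\]
% i.e., the map \tau_{ext} \mapsto \dot{q} is passive, which is the sense
% in which stability during interaction with an unknown environment holds.
```

Under this kind of structure, stability follows from the controller's form rather than from an analysis of the learned weights, which is why interpretability of the policy parameterization matters.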


    Title:

    Learning Deep Neural Policies with Stability Guarantees


    Contributors:
    Abdul Khader, Shahbaz (author) / Yin, Hang (author) / Falco, Pietro (author) / Kragic, Danica (author)

    Media type:

    Paper


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629



    A game-based approximate verification of deep neural networks with provable guarantees

    Wu, M / Wicker, M / Ruan, W et al. | BASE | 2019

    Free access



    Active flow control optimization with stability guarantees

    Repolho Cagliari, Luiz Victor R. / Babcock, Tucker / Hicken, Jason E. et al. | AIAA | 2022


    Interactive Learning with Corrective Feedback for Policies Based on Deep Neural Networks

    Pérez-Dattari, Rodrigo / Celemin, Carlos / Ruiz-del-Solar, Javier et al. | Springer Verlag | 2020