Deep reinforcement learning (DRL) has been successfully used to solve various robotic manipulation tasks. However, most existing works do not address the issue of control stability. This is in sharp contrast to the control theory community, where the well-established norm is to prove stability whenever a control law is synthesized. Traditional stability analysis is difficult for DRL because of the uninterpretable nature of neural network policies and the unknown system dynamics. In this work, unconditional stability is obtained by deriving an interpretable deep policy structure based on the energy shaping control of Lagrangian systems. Stability during physical interaction with an unknown environment is then established based on passivity. The result is stability-guaranteed DRL in a model-free framework that is general enough for contact-rich manipulation tasks. With an experiment on a peg-in-hole task, we demonstrate, to the best of our knowledge, the first DRL with a stability guarantee on a real robotic manipulator.
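
A minimal sketch of the underlying mechanism (the notation below, including the learned potential $\Phi_\theta$ and damping matrix $D_\theta$, is illustrative and not necessarily the paper's exact formulation): for a Lagrangian manipulator

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u + \tau_{\mathrm{ext}},$$

an energy-shaping policy

$$u = g(q) - \nabla_q \Phi_\theta(q) - D_\theta(q)\,\dot{q}, \qquad \Phi_\theta \text{ bounded below}, \quad D_\theta(q) \succ 0,$$

admits the storage function $H = \tfrac{1}{2}\dot{q}^\top M(q)\dot{q} + \Phi_\theta(q)$. Using the skew-symmetry of $\dot{M} - 2C$,

$$\dot{H} = \dot{q}^\top \tau_{\mathrm{ext}} - \dot{q}^\top D_\theta(q)\,\dot{q} \le \dot{q}^\top \tau_{\mathrm{ext}},$$

so the closed loop is passive at the interaction port $(\tau_{\mathrm{ext}}, \dot{q})$ for any parameters $\theta$, which is what lets stability hold during contact with an unknown passive environment.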





    Title:

    Learning Deep Neural Policies with Stability Guarantees


    Contributors:

    Type of media:

    Paper


    Type of material:

    Electronic Resource


    Language:

    English


    Classification:

    DDC: 629



    Similar items:

    A game-based approximate verification of deep neural networks with provable guarantees

    Wu, M / Wicker, M / Ruan, W et al. | BASE | 2019



    Active flow control optimization with stability guarantees

    Repolho Cagliari, Luiz Victor R. / Babcock, Tucker / Hicken, Jason E. et al. | AIAA | 2022



    Interactive Learning with Corrective Feedback for Policies Based on Deep Neural Networks

    Pérez-Dattari, Rodrigo / Celemin, Carlos / Ruiz-del-Solar, Javier et al. | TIBKAT | 2020