Embedding an optimization process has been explored as a way to impose efficient and flexible policy structures. Existing work often builds on nonlinear optimization with explicit unrolling of iteration steps, making policy inference prohibitively expensive for online learning and real-time control. Our approach instead embeds a linear-quadratic-regulator (LQR) formulation within a Koopman representation, combining the tractability of a closed-form solution with the richness of a non-convex neural network. We use a few auxiliary objectives and a reparameterization to enforce the optimality conditions of the policy, which can be easily integrated into standard gradient-based learning. Our approach is shown to be effective for learning policies that exhibit an optimality structure and for efficient reinforcement learning, including simulated pendulum control, 2D and 3D walking, and manipulation of both rigid and deformable objects. We also demonstrate a real-world application in a robot pivoting task. (In submission for L4DC 2022.)
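
To make the abstract's central idea concrete, here is a minimal sketch of a Koopman-embedded LQR policy, not the authors' implementation: the lifting function phi (standing in for a neural network), the lifted dynamics (A, B), and the cost weights (Q, R) are all assumed placeholders.

    # Illustrative sketch only, not the paper's method. A learned network
    # lifts the state into a space where dynamics are approximately linear,
    # so the control gain follows in closed form from the discrete-time
    # algebraic Riccati equation instead of unrolled nonlinear optimization.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    def phi(x):
        # Hypothetical lifting function standing in for a neural network:
        # maps the raw state to Koopman observables.
        return np.array([x[0], x[1], np.sin(x[0]), np.cos(x[0])])

    # Assumed lifted linear dynamics z' = A z + B u; in the paper's setting
    # these would be learned jointly with the lifting network.
    n, m = 4, 1
    rng = np.random.default_rng(0)
    A = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
    B = 0.1 * rng.standard_normal((n, m))
    Q = np.eye(n)        # quadratic state cost in lifted coordinates
    R = 0.1 * np.eye(m)  # control-effort cost

    # Closed-form LQR: solve the Riccati equation once; the policy is then
    # a static linear feedback in the lifted space.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    def policy(x):
        # u = -K * phi(x): inference is one lift plus one matrix multiply.
        return -K @ phi(x)

    print(policy(np.array([0.1, -0.2])))

Because the Riccati solve replaces iterative nonlinear optimization, policy inference reduces to a lift and a matrix multiply, which is the tractability benefit the abstract highlights.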


    Title:
    Policy Learning with Embedded Koopman Optimal Control

    Contributors:
    Yin, Hang (author) / Welle, Michael C. (author) / Kragic, Danica (author)

    Media type:
    Paper

    Format:
    Electronic resource

    Language:
    English

    Classification:
    DDC: 629



    Similar titles:

    A Data-Driven Nonlinear Optimal Control Using Koopman Operator on Hamiltonian Flow
    Sato, Kyosuke / Bando, Mai / Hokamoto, Shinji | TIBKAT | 2023

    Koopman-Operator Control Optimization for Relative Motion in Space
    Servadio, Simone / Armellin, Roberto / Linares, Richard | AIAA | 2023

    Koopman-Operator-Based Attitude Dynamics and Control on SO(3)
    Chen, Ti / Shan, Jinjun / Wen, Hao | Springer Verlag | 2022