Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, August 2023. Advisor: 김유단 (Youdan Kim).

A model-free off-policy reinforcement learning algorithm is proposed for solving optimal control problems for dynamic systems. The algorithm is designed to converge not only to the optimal policy but also to a stabilizing one, which is one of the most critical concerns in designing controllers for safety-critical systems such as unmanned aerial vehicles. Unlike typical approximate dynamic programming methods, the proposed algorithm does not require an initial stabilizing policy, which is a key advantage.

In the first part of the dissertation, a data-driven surrogate Q-learning algorithm is proposed for linear systems based on the extended Kleinman iteration, which solves the algebraic Riccati equation. To allow an unstable initial policy, the value function is redefined implicitly so that the performance index of an unstable policy can be evaluated. Based on this implicit value function, an action-value function called the surrogate Q-function is constructed by augmenting virtual control dynamics in the state space to properly define the values of state-input pairs. An off-policy data-driven method called surrogate Q-learning is then derived from the surrogate Q-function; it enables the reuse of data obtained from arbitrary control sources, e.g., trained human experts or fine-tuned PID controllers. Since surrogate Q-learning is equivalent to the extended Kleinman iteration, the convergence of the extended Kleinman iteration to the unique positive definite solution, which yields the optimal stabilizing policy, is proven using matrix inertia theory.

The second part of the dissertation is devoted to the application of the proposed reinforcement learning algorithm to nonlinear systems. Koopman operator theory is employed to linearize nonlinear systems in an infinite-dimensional space, a procedure called Koopman lifting linearization. The controllability and observability of the linearized systems are investigated by assuming that there exists a finite-dimensional ...
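The abstract builds on the extended Kleinman iteration for the algebraic Riccati equation. As background only, the sketch below shows the classical Kleinman policy iteration in Python (NumPy/SciPy); unlike the dissertation's extension, it assumes a known model (A, B) and a stabilizing initial gain K0, and the function name and example system are illustrative assumptions rather than the author's algorithm.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_iteration(A, B, Q, R, K0, n_iter=50, tol=1e-9):
    """Classical Kleinman policy iteration for the continuous-time ARE.

    Alternates policy evaluation (a Lyapunov equation) and policy
    improvement until the gain converges to the LQR-optimal K = R^-1 B^T P.
    Requires a stabilizing initial gain K0, a requirement that the
    dissertation's extended iteration is designed to remove.
    """
    K, P_prev = K0, None
    for _ in range(n_iter):
        A_cl = A - B @ K                                  # closed loop under current policy
        # Policy evaluation: solve A_cl^T P + P A_cl + Q + K^T R K = 0
        P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
        # Policy improvement
        K = np.linalg.solve(R, B.T @ P)
        if P_prev is not None and np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K

# Example: double integrator with a stabilizing initial gain
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K0 = np.array([[1.0, 2.0]])        # places both closed-loop poles at -1
P, K = kleinman_iteration(A, B, Q, R, K0)
print(K)                           # approaches the LQR gain for (Q, R)
```

For the second part, the Koopman lifting linearization can likewise be illustrated, in a deliberately simplified form, by an EDMD-style least-squares fit of a linear predictor in lifted coordinates. The helper name `edmd_lift`, the snapshot-matrix layout, and the monomial dictionary are assumptions made for this sketch and are not taken from the dissertation.

```python
import numpy as np

def edmd_lift(X, Y, U, lift):
    """EDMD-style fit of a lifted linear predictor z_{k+1} ~ A z_k + B u_k.

    X, Y : (n, N) state snapshots, with Y the one-step successors of X
    U    : (m, N) inputs applied at the X snapshots
    lift : callable mapping one state vector to its lifted coordinates
    """
    Z  = np.column_stack([lift(x) for x in X.T])   # lifted states
    Zp = np.column_stack([lift(y) for y in Y.T])   # lifted successors
    W  = np.vstack([Z, U])                          # stacked regressors [z; u]
    # Least-squares solution of Zp ~ [A B] W via the normal equations
    AB = Zp @ W.T @ np.linalg.pinv(W @ W.T)
    nz = Z.shape[0]
    return AB[:, :nz], AB[:, nz:]                   # (A_lift, B_lift)

# Example dictionary for a two-state system: the state plus quadratic monomials
lift = lambda x: np.array([x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])
```

Once a lifted linear model of this kind is identified, a linear optimal control method such as the one in the first part can in principle be applied in the lifted coordinates; the controllability and observability analysis mentioned in the abstract concerns the properties of such finite-dimensional lifted systems.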


    Title:

    Data-Driven Optimal Control for Linear Systems with Arbitrary Initial Policy and Application to Nonlinear Systems Using Koopman Operators



    Publication date:

    2023-01-01



    Media type:

    Thesis


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 006 / 629



    A Data-Driven Nonlinear Optimal Control Using Koopman Operator on Hamiltonian Flow

    Sato, Kyosuke / Bando, Mai / Hokamoto, Shinji | TIBKAT | 2023


    Policy Learning with Embedded Koopman Optimal Control

    Yin, Hang / Welle, Michael C. / Kragic, Danica | BASE


    Method for calling a lift system

    STUDER CHRISTIAN / KUSSEROW MARTIN / ZHANG QIXUAN | European Patent Office | 2023


    Koopman Operators for Bifurcation Analysis in Hypersonic Aerothermoelasticity

    Gueho, Damien / Macchio, Gregory R. / Huang, Daning et al. | TIBKAT | 2022