This paper presents the Relaxed Continuous-Time Actor-Critic (RCTAC) algorithm, a method for finding a nearly optimal policy for nonlinear continuous-time (CT) systems with known dynamics over an infinite horizon, such as the path-tracking control of vehicles. RCTAC has several advantages over existing adaptive dynamic programming algorithms for CT systems: it requires neither “admissibility” of the initial policy nor an input-affine system structure for convergence. Instead, starting from any initial policy, RCTAC converges to an admissible, and subsequently nearly optimal, policy for a general nonlinear system with a saturated controller. RCTAC consists of two phases: a warm-up phase and a generalized policy iteration phase. The warm-up phase minimizes the square of the Hamiltonian to achieve admissibility, while the generalized policy iteration phase relaxes the update termination conditions for faster convergence. The convergence and optimality of the algorithm are proven through Lyapunov analysis, and its effectiveness is demonstrated through simulations and real-world path-tracking tasks.
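
As a reading aid for the abstract above, the following is a minimal, illustrative sketch of the two-phase structure it describes: a warm-up phase that drives the squared Hamiltonian toward zero over both the critic and the saturated actor, followed by a generalized policy iteration phase whose evaluation and improvement steps stop after a fixed small number of updates rather than at full convergence. The toy dynamics f, the running cost, the network sizes, the thresholds, and the step counts are assumptions made for illustration, not details taken from the paper.

    # Illustrative sketch only: toy dynamics, cost, network sizes, and
    # hyperparameters below are assumptions, not the authors' implementation.
    import torch
    import torch.nn as nn

    def f(x, u):
        # Known continuous-time dynamics x_dot = f(x, u); toy stand-in.
        return -x + torch.tanh(u)

    def running_cost(x, u):
        # Running cost l(x, u) of the infinite-horizon objective.
        return (x ** 2).sum(-1, keepdim=True) + 0.1 * (u ** 2).sum(-1, keepdim=True)

    critic = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))            # V(x)
    actor = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1), nn.Tanh())  # saturated u(x)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)

    def hamiltonian(x):
        # H(x, u, dV/dx) = l(x, u) + (dV/dx)^T f(x, u)
        x = x.detach().requires_grad_(True)
        u = actor(x)
        dVdx = torch.autograd.grad(critic(x).sum(), x, create_graph=True)[0]
        return running_cost(x, u) + (dVdx * f(x, u)).sum(-1, keepdim=True)

    def sample_states(n=256):
        return 4.0 * torch.rand(n, 2) - 2.0

    # Warm-up phase: minimize H^2 over critic and actor jointly until the
    # policy becomes admissible (the stopping threshold is an assumption).
    for _ in range(2000):
        loss = hamiltonian(sample_states()).pow(2).mean()
        opt_c.zero_grad(); opt_a.zero_grad()
        loss.backward()
        opt_c.step(); opt_a.step()
        if loss.item() < 1e-3:
            break

    # Generalized policy iteration phase with relaxed termination: a fixed,
    # small number of inner steps replaces running each policy evaluation /
    # improvement to full convergence.
    for _ in range(500):
        for _ in range(5):  # policy evaluation: fit V by driving H^2 toward zero
            loss_c = hamiltonian(sample_states()).pow(2).mean()
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for _ in range(5):  # policy improvement: update u toward argmin_u H
            loss_a = hamiltonian(sample_states()).mean()
            opt_a.zero_grad(); loss_a.backward(); opt_a.step()

Here the relaxed termination appears as the fixed handful of inner gradient steps per evaluation and improvement pass; in the paper, Lyapunov analysis is what guarantees that such early stopping still yields a convergent, nearly optimal policy.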



    Title:
    Relaxed Actor-Critic With Convergence Guarantees for Continuous-Time Optimal Control of Nonlinear Systems

    Contributors:
    Duan, Jingliang (author) / Li, Jie (author) / Ge, Qiang (author) / Li, Shengbo Eben (author) / Bujarbaruah, Monimoy (author) / Ma, Fei (author) / Zhang, Dezhao (author)

    Published in:

    Publication date:
    2023-05-01

    Size:
    1261866 byte

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English




    Similar titles:

    Actor-Critic Reinforcement Learning for Control With Stability Guarantee

    Han, M / Zhang, L / Wang, J et al. | BASE | 2020


    Stepwise Soft Actor–Critic for UAV Autonomous Flight Control

    Ha Jun Hwang / Jaeyeon Jang / Jongkwan Choi et al. | DOAJ | 2023


    Actor-Critic Policy Learning in Cooperative Planning

    Redding, Joshua / Geramifard, Alborz / Choi, Han-Lim et al. | AIAA | 2010


    Actor-Critic reinforcement learning for optimal design of piping support constraint combinations

    Jong-Ho Ham / Jung-Eun An / Hee-Sung Lee et al. | DOAJ | 2022
