In this paper, the distributed coordination control of path tracking and Nash equilibrium seeking of networked automated ground vehicle systems with unknown dynamics is investigated under the framework of graphical games. Unlike existing works that assume the vehicle dynamics are known, each vehicle in this paper is considered to have completely unknown system dynamics. To solve this problem, a learning-based data-driven technique is proposed to identify and reconstruct the unknown system matrices. Based on the identified system matrices, an offline reinforcement learning (RL) algorithm is then proposed to derive both the optimal control policies and the policy iteration solution for graphical games, and its convergence is analyzed. In addition, an online learning algorithm that relies only on measured state and input information is developed to solve the optimal path tracking control problem. As a result, the dependence of traditional tracking control protocols on the vehicle dynamics is completely relaxed by the proposed method. The optimal distributed control policies found by the proposed RL algorithm satisfy the global Nash equilibrium and synchronize all tracked vehicles to the pinning vehicle. Numerical simulation results are provided to demonstrate the effectiveness of the theoretical analysis.
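The data-driven identification step described in the abstract can be illustrated with a minimal sketch. Purely for illustration, assume discrete-time linear error dynamics x_{k+1} = A x_k + B u_k with unknown matrices A and B (the paper's exact vehicle model, dimensions, and identification scheme are not reproduced here); the unknown matrices can then be reconstructed from recorded state/input samples by least squares:

```python
import numpy as np

# Hypothetical discrete-time vehicle error dynamics x_{k+1} = A x_k + B u_k.
# A_true and B_true are unknown to the controller; they are used here only
# to generate sample data for this illustrative sketch.
rng = np.random.default_rng(0)
n, m, N = 4, 2, 200                      # state dim, input dim, number of samples
A_true = np.eye(n) + 0.05 * rng.standard_normal((n, n))
B_true = 0.1 * rng.standard_normal((n, m))

# Collect state/input samples under an exploratory (persistently exciting) input.
X = np.zeros((n, N + 1))
U = rng.standard_normal((m, N))
X[:, 0] = rng.standard_normal(n)
for k in range(N):
    X[:, k + 1] = A_true @ X[:, k] + B_true @ U[:, k]

# Least-squares reconstruction of [A B] from the data:
#   X_next ≈ [A B] @ [X; U]  =>  [A B] = X_next @ pinv([X; U])
Z = np.vstack([X[:, :N], U])             # stacked regressor, shape (n+m, N)
AB_hat = X[:, 1:] @ np.linalg.pinv(Z)
A_hat, B_hat = AB_hat[:, :n], AB_hat[:, n:]

print("identification error:",
      np.linalg.norm(A_hat - A_true), np.linalg.norm(B_hat - B_true))
```

The reconstructed matrices could then serve as the model for an offline policy iteration, while the paper's online variant avoids this explicit reconstruction by learning directly from the measured states and inputs.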
Cooperative Path Following Control in Autonomous Vehicles Graphical Games: A Data-Based Off-Policy Learning Approach
IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 8, pp. 9364-9374
2024-08-01
Article (Journal)
Electronic Resource
English
Internal Model-Based Robust Path-Following Control for Autonomous Vehicles | Springer Verlag | 2024
Path following controller for autonomous vehicles | IEEE | 2019