This paper proposes a trajectory tracking control framework for a two-degree-of-freedom (2-DOF) Unmanned Ground Vehicle (UGV) based on Deep Reinforcement Learning (DRL). The Stanley algorithm estimates the vehicle's initial control inputs, and the DRL agent then adjusts those inputs to precisely follow a predefined trajectory. The Double Deep Q-Network (DDQN) algorithm, a DRL method, is used to learn the best policy during training. Two training approaches are compared: one based on the yaw error alone, and the other based on both the yaw and distance errors between the vehicle's path and the reference trajectory. The framework uses two Neural Networks (NNs): a target network that estimates future rewards, and a critic network that serves as the controller generating the vehicle's control actions. The framework is validated through a series of simulations in which the 2-DOF UGV travels at 1 m/s, 2 m/s, and 5 m/s on an elliptical track ~420 m in length. The simulation results show that the proposed framework improves trajectory tracking accuracy by ~64%. A comparison between the Stanley-based DDQN and the Stanley method alone shows that the Stanley-based DDQN provides a robust framework for accurate, adaptive trajectory tracking in high-precision navigation.
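As a rough, non-authoritative illustration of the pipeline the abstract describes (a sketch, not the authors' implementation), the Python snippet below pairs the standard Stanley steering law with a Double DQN correction term and shows the two reward-shaping options mentioned above. The gain K, the discrete action set, the reward weights, and the toy Q-network callables are all assumptions introduced for illustration.

```python
import numpy as np

K = 2.5  # Stanley cross-track gain (assumed value)

def stanley_steering(yaw_error, cross_track_error, speed):
    """Standard Stanley law: heading error plus arctan of the scaled distance error."""
    return yaw_error + np.arctan2(K * cross_track_error, speed)

# Discrete steering corrections the DDQN agent can add to the Stanley estimate (assumed set).
ACTIONS = np.deg2rad([-2.0, -1.0, 0.0, 1.0, 2.0])

def reward(yaw_error, cross_track_error, use_distance=True):
    """The two training approaches from the abstract: yaw error only,
    or yaw plus distance error (the 0.5 weight is an assumption)."""
    r = -abs(yaw_error)
    if use_distance:
        r -= 0.5 * abs(cross_track_error)
    return r

def ddqn_target(reward_value, next_state, q_online, q_target, gamma=0.99, done=False):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it (decoupling selection from evaluation)."""
    if done:
        return reward_value
    best_a = int(np.argmax(q_online(next_state)))          # action selection
    return reward_value + gamma * q_target(next_state)[best_a]  # action evaluation

def control_input(state, yaw_error, cross_track_error, speed, q_online):
    """Stanley provides the initial steering input; the trained DDQN adds a correction."""
    base = stanley_steering(yaw_error, cross_track_error, speed)
    correction = ACTIONS[int(np.argmax(q_online(state)))]
    return base + correction
```

Here `q_online` and `q_target` stand in for the critic and target networks; any function mapping a state to a vector of five Q-values (one per action in `ACTIONS`) would slot in.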
Reinforcement Learning for Precision Navigation: DDQN-Based Trajectory Tracking in Unmanned Ground Vehicles
2024-05-21
789563 bytes
Conference paper
Electronic Resource
English
Trajectory Tracking and Navigation Model for Autonomous Vehicles Using Reinforcement Learning | Springer Verlag | 2024
Intersection navigation for unmanned ground vehicles | SPIE | 1996
Autonomous road navigation for unmanned ground vehicles | SPIE | 1995
Research on the Model Predictive Trajectory Tracking Control of Unmanned Ground Tracked Vehicles | DOAJ | 2023