This paper presents a novel model-reference reinforcement learning algorithm for the intelligent tracking control of uncertain autonomous surface vehicles with collision avoidance. The proposed algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence. In the proposed design, a nominal system is used to construct a baseline tracking controller with a conventional control approach; this nominal system also defines the desired behaviour of the uncertain autonomous surface vehicle in an obstacle-free environment. Through reinforcement learning, the overall tracking controller compensates for model uncertainties while simultaneously achieving collision avoidance in environments with obstacles. Compared with traditional deep reinforcement learning methods, the proposed learning-based control provides stability guarantees and better sample efficiency. The performance of the new algorithm is demonstrated on an autonomous surface vehicle example.
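The abstract describes the control architecture only at a high level. The minimal Python sketch below illustrates one plausible reading of the model-reference structure, in which the total control input is the sum of a baseline tracking law designed for a nominal model and a learned compensation term for uncertainty and collision avoidance. All names (nominal_dynamics, baseline_control, rl_compensation), gains, and the simplified planar dynamics are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the model-reference control structure: real input = baseline law
# for the nominal (obstacle-free) model + learned residual compensation.
# Hypothetical placeholders only; not the paper's API or trained policy.
import numpy as np

def nominal_dynamics(x, u, dt=0.1):
    """Nominal double-integrator surrogate for the vehicle in the plane:
    state x = [px, py, vx, vy], input u = [ax, ay]."""
    px, py, vx, vy = x
    ax, ay = u
    return np.array([px + vx * dt, py + vy * dt, vx + ax * dt, vy + ay * dt])

def baseline_control(x, x_ref, kp=1.0, kd=2.0):
    """Conventional PD-style tracking law designed for the nominal model;
    it defines the desired behaviour in an obstacle-free environment."""
    pos_err = x_ref[:2] - x[:2]
    vel_err = x_ref[2:] - x[2:]
    return kp * pos_err + kd * vel_err

def rl_compensation(policy_params, x, x_nominal, obstacles):
    """Stand-in for the learned term: in the paper this is trained with
    reinforcement learning to cancel the gap between the true vehicle and the
    nominal reference model and to add a collision-avoidance action.
    Here it is a fixed linear-plus-repulsion rule purely for illustration."""
    K = policy_params["K"]                 # assumed learned gain of shape (2, 4)
    u_unc = K @ (x_nominal - x)            # drive the vehicle toward the reference model
    u_avoid = np.zeros(2)
    for obs in obstacles:                  # simple repulsive term as a placeholder
        d = x[:2] - obs
        dist = np.linalg.norm(d) + 1e-6
        if dist < 2.0:
            u_avoid += (2.0 - dist) * d / dist
    return u_unc + u_avoid

# One control step: total input = baseline term + learned compensation.
x = np.array([0.0, 0.0, 0.0, 0.0])         # current (true) vehicle state
x_nom = np.array([0.1, 0.0, 0.5, 0.0])     # state of the nominal reference model
x_ref = np.array([1.0, 1.0, 0.0, 0.0])     # desired trajectory point
policy = {"K": 0.5 * np.hstack([np.eye(2), np.eye(2)])}
u = baseline_control(x, x_ref) + rl_compensation(policy, x, x_nom,
                                                 obstacles=[np.array([0.5, 0.5])])
x_next = nominal_dynamics(x, u)
```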
Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles
IEEE Transactions on Intelligent Transportation Systems; 23(7): 8770-8781
2022-07-01
2221435 bytes
Article (Journal)
Electronic Resource
English
Trajectory Tracking and Navigation Model for Autonomous Vehicles Using Reinforcement Learning | Springer Verlag | 2024
Vehicles Control: Collision Avoidance using Federated Deep Reinforcement Learning | ArXiv | 2023
Collision Free Guidance of Autonomous Road Vehicles | British Library Conference Proceedings | 1993