Reinforcement Learning (RL) over a motion-skill space has been shown to generate more diverse behaviors than RL over a low-level control space, and has exhibited superior autonomous driving performance in complex traffic scenarios. However, incomplete observations pose challenges to efficient skill exploration under unsupervised conditions, hampering driving performance and applicability. In this paper, we propose dynamic-aware RL with a hybrid network (Da-HnRL) to develop a pseudo-hierarchical planning framework for better motion-skill learning in challenging dense traffic. Based on semi-POMDP modeling, we construct a hybrid network with skip connections as the RL backbone, facilitating a better understanding of the underlying system dynamics. We then design an efficiency-oriented reward-shaping mechanism to incentivize active skill exploration, promoting an enhanced trade-off between exploration and exploitation. Furthermore, we provide a comprehensive scoring mechanism for policy identification, ensuring near-optimality. We validate the proposed methods on challenging dense-traffic tasks. The results demonstrate the superiority of our approach over previous methods, with improved learning efficiency, driving stability, and generalization.
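The record does not specify the architecture of the hybrid network. As one plausible reading, the minimal sketch below assumes a recurrent branch that summarizes the observation history (consistent with the semi-POMDP setting) fused with a feedforward branch, with a skip connection passing the raw current observation to the skill head. All class names, layer sizes, and the skill-space dimensionality are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class HybridSkipBackbone(nn.Module):
    """Hypothetical hybrid RL backbone over a motion-skill space.

    Sketch only: the actual Da-HnRL architecture is not described in
    this record. Assumed components: a GRU for observation history, an
    MLP for the current observation, and a skip connection feeding the
    raw observation directly into the skill head.
    """

    def __init__(self, obs_dim: int, hidden_dim: int = 256, skill_dim: int = 8):
        super().__init__()
        # Recurrent branch: summarizes history under partial observability.
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # Feedforward branch: features of the most recent observation.
        self.mlp = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Skip connection: raw observation concatenated with both
        # branch outputs before the skill-selection head.
        self.skill_head = nn.Linear(2 * hidden_dim + obs_dim, skill_dim)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim)
        last_obs = obs_seq[:, -1, :]
        _, h = self.gru(obs_seq)           # h: (1, batch, hidden_dim)
        recurrent_feat = h.squeeze(0)
        ff_feat = self.mlp(last_obs)
        fused = torch.cat([recurrent_feat, ff_feat, last_obs], dim=-1)
        return self.skill_head(fused)      # logits over motion skills
```

Under these assumptions, the head's logits would parameterize a categorical policy over motion skills, with the paper's reward shaping and scoring mechanisms applied downstream during training and policy selection.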
A Pseudo-Hierarchical Planning Framework with Dynamic-Aware Reinforcement Learning for Autonomous Driving
2024 IEEE Intelligent Vehicles Symposium (IV); pp. 2345-2352
02.06.2024
4793271 bytes
Article (Conference)
Electronic resource
English
Deep Hierarchical Reinforcement Learning for Autonomous Driving with Distinct Behaviors
British Library Conference Proceedings | 2018
Hierarchical Learned Risk-Aware Planning Framework for Human Driving Modeling
ArXiv | 2024