The random exploration inherent in reinforcement learning (RL) stands in the way of human-like autonomous driving, owing to the prohibitively high safety requirements of the driving task. In this paper, we propose a deep refine reinforcement learning (DR2L) approach that removes non-safety-critical actions and reconstructs critical ones, which effectively improves the efficiency of exploration. The core is an action filter based on a two-stage vehicle motion model, which computes the critical values of dangerous actions and reconstructs the action space by filtering out obviously incorrect actions. In addition, we propose using the beta distribution as the stochastic policy, which eliminates the bias of the Gaussian policy and provides faster convergence. Finally, we design a spatial-temporal attention network that extracts hidden environmental information as the state representation to further enhance RL performance. Simulations show that DR2L effectively improves the safety of the agent during training, and our results show that the beta policy converges significantly faster than the Gaussian policy when both are used with proximal policy optimization (PPO).
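
To make the action-filter idea concrete, here is a minimal sketch under assumed point-mass kinematics; it is not the authors' implementation, and the parameters (dt, a_brake, headway) and function names are illustrative. Each candidate acceleration is rolled out through a simple two-stage model (one decision step at the candidate acceleration, then maximum braking), and any action whose predicted stopping distance exceeds the current headway is treated as obviously incorrect and removed.

import numpy as np

def filter_actions(v, headway, candidates, dt=0.1, a_brake=6.0):
    # Keep only candidate accelerations that can still stop within the headway.
    safe = []
    for a in candidates:
        # Stage 1: apply the candidate acceleration for one decision step.
        v1 = max(v + a * dt, 0.0)
        d1 = v * dt + 0.5 * a * dt * dt
        # Stage 2: brake at maximum deceleration until standstill.
        d2 = v1 * v1 / (2.0 * a_brake)
        if d1 + d2 < headway:
            safe.append(a)
    # If nothing passes, fall back to hard braking only.
    return np.array(safe) if safe else np.array([-a_brake])

# Example: ego at 20 m/s with 35 m headway; only sufficiently decelerating actions survive.
print(filter_actions(20.0, 35.0, np.linspace(-6.0, 3.0, 10)))

Likewise, a minimal sketch of a beta policy head for PPO, assuming a PyTorch implementation rather than the authors' code: the network outputs the two concentration parameters, shifted above 1 so the density is unimodal on (0, 1), which avoids the boundary bias introduced when a Gaussian policy is clipped to a bounded control range.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

class BetaPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.alpha_head = nn.Linear(hidden, action_dim)
        self.beta_head = nn.Linear(hidden, action_dim)

    def forward(self, state):
        h = self.body(state)
        # softplus(...) + 1 keeps both concentration parameters above 1,
        # so the Beta density is unimodal and bounded on (0, 1).
        alpha = F.softplus(self.alpha_head(h)) + 1.0
        beta = F.softplus(self.beta_head(h)) + 1.0
        return Beta(alpha, beta)

# Usage: sample in (0, 1), rescale to the control range (e.g. steering in [-1, 1]);
# the log-probability feeds the PPO surrogate objective.
dist = BetaPolicy(state_dim=8, action_dim=2)(torch.zeros(1, 8))
a01 = dist.sample()
action = 2.0 * a01 - 1.0
log_prob = dist.log_prob(a01).sum(-1)
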
Refine Reinforcement Learning for Safety Training of Autonomous Driving
2024-09-24
1733868 bytes
Conference paper
Electronic Resource
English