In this paper, we present a safe deep reinforcement learning system for automated driving. The proposed framework leverages the merits of both rule-based and learning-based approaches for safety assurance. Our safety system consists of two modules: handcrafted safety and dynamically-learned safety. The handcrafted safety module is a heuristic safety rule, based on common driving practice, that ensures a minimum relative gap to a traffic vehicle. The dynamically-learned safety module, in contrast, is a data-driven safety rule that learns safety patterns from driving data. Specifically, it incorporates a model lookahead beyond the immediate reward of reinforcement learning to predict safety further into the future. If any of the predicted future states leads to a near-miss or collision, a negative reward is assigned to the reward function to avoid the collision and accelerate the learning process. We demonstrate the capability of the proposed framework in a simulation environment with varying traffic density. Our results show the superior performance of the policy enhanced with the dynamically-learned safety module.
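To make the lookahead-based reward shaping concrete, the following minimal Python sketch (not the authors' implementation) rolls a learned dynamics model a few steps forward from the current state and adds a negative penalty to the immediate reward whenever a predicted state violates a minimum-gap rule. The State fields, the toy_dynamics model, the 5 m gap threshold, the 5-step horizon, and the -10 penalty are all illustrative assumptions rather than values from the paper.

from dataclasses import dataclass
from typing import Callable


@dataclass
class State:
    """Longitudinal gap (m) and closing speed (m/s) relative to the lead vehicle."""
    gap: float
    rel_speed: float


def is_unsafe(state: State, min_gap: float = 5.0) -> bool:
    """Handcrafted-style check: flag a near-miss/collision when the gap falls below a minimum."""
    return state.gap < min_gap


def shaped_reward(state: State,
                  action: float,
                  immediate_reward: float,
                  dynamics_model: Callable[[State, float], State],
                  horizon: int = 5,
                  collision_penalty: float = -10.0) -> float:
    """Roll the dynamics model `horizon` steps ahead; if any predicted state
    is unsafe, add a negative penalty to the immediate reward."""
    predicted = state
    for _ in range(horizon):
        predicted = dynamics_model(predicted, action)
        if is_unsafe(predicted):
            return immediate_reward + collision_penalty
    return immediate_reward


def toy_dynamics(state: State, accel: float, dt: float = 0.1) -> State:
    """Toy constant-acceleration model, used only to make the sketch runnable."""
    rel_speed = state.rel_speed + accel * dt
    gap = state.gap - rel_speed * dt
    return State(gap=gap, rel_speed=rel_speed)


if __name__ == "__main__":
    s = State(gap=6.0, rel_speed=4.0)  # already closing fast on the lead vehicle
    r = shaped_reward(s, action=1.0, immediate_reward=1.0, dynamics_model=toy_dynamics)
    print(r)  # -9.0: the lookahead predicts the gap dropping below 5 m, so the reward is penalized

In the paper's framework, a learned dynamics model would replace toy_dynamics and the unsafe-state check would be the data-driven rule learned from driving data; the sketch only illustrates how a predicted near-miss translates into a negative reward before a collision actually occurs.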
Deep Reinforcement Learning with Enhanced Safety for Autonomous Highway Driving
2020 IEEE Intelligent Vehicles Symposium (IV), pp. 1550-1555
2020-10-19
Conference paper
Electronic Resource
English