Deep reinforcement learning (RL) has been widely applied to motion planning problems of autonomous vehicles in urban traffic. However, traditional deep RL algorithms cannot ensure safe trajectories throughout training and deployment. To address this, we propose a provably safe RL algorithm for urban autonomous driving. We add a novel safety layer to the RL process that verifies the safety of high-level actions before they are executed. Our safety layer is based on invariably safe braking sets, which constrain actions to guarantee safe lane changes and safe intersection crossings. In addition, we introduce a generalized discrete high-level action space that can represent all high-level intersection driving maneuvers together with various desired accelerations. Finally, we conduct extensive experiments on the inD dataset, which contains recorded urban driving scenarios. Our analysis demonstrates that the safe agent never causes a collision and that the safety layer's lane change verification can even improve goal-reaching performance compared to the unsafe baseline agent.
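The core mechanism described in the abstract, a safety layer that verifies each high-level action against an invariably safe braking set before it is executed, can be illustrated with a minimal Python sketch. This is not the paper's implementation: the one-dimensional state model, the constant-deceleration stopping condition, the parameter values (a_brake, margin, dt), and all function names are simplifying assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class EgoState:
    position: float   # longitudinal position along the lane [m]
    velocity: float   # current speed [m/s]

def braking_distance(v, a_brake):
    # Distance needed to stop from speed v with constant deceleration a_brake.
    return v * v / (2.0 * a_brake)

def in_safe_braking_set(ego, obstacle_pos, a_brake=8.0, margin=2.0):
    # Simplified invariable-safety condition: the ego vehicle can still come to a
    # full stop before the obstacle (or stop line), keeping a safety margin.
    return ego.position + braking_distance(ego.velocity, a_brake) + margin <= obstacle_pos

def verify_action(ego, accel, obstacle_pos, dt=0.5):
    # Forward-simulate one decision step with the proposed acceleration and check
    # that the successor state remains in the simplified safe braking set.
    v_next = max(0.0, ego.velocity + accel * dt)
    x_next = ego.position + 0.5 * (ego.velocity + v_next) * dt
    return in_safe_braking_set(EgoState(x_next, v_next), obstacle_pos)

def filter_action(ego, candidate_accels, obstacle_pos, fallback=-8.0):
    # Safety-layer action filter: keep the agent's preferred action if it verifies,
    # otherwise fall back to the strongest (verified) braking action.
    for accel in candidate_accels:        # ordered by the RL agent's preference
        if verify_action(ego, accel, obstacle_pos):
            return accel
    return fallback

# Example: the agent prefers to accelerate, but only keeping the current speed verifies as safe.
ego = EgoState(position=0.0, velocity=12.0)
print(filter_action(ego, candidate_accels=[2.0, 0.0, -2.0], obstacle_pos=18.0))  # -> 0.0

In the paper, verification relies on invariably safe sets for lane changing and intersection crossing rather than this one-dimensional stopping condition; the sketch only conveys the verify-then-execute pattern of the safety layer.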
Safe Reinforcement Learning for Urban Driving using Invariably Safe Braking Sets
08.10.2022
905660 bytes
Conference paper
Electronic resource
English
PROVABLY-SAFE COOPERATIVE DRIVING VIA INVARIABLY SAFE SETS
British Library Conference Proceedings | 2020
SAFE DEEP REINFORCEMENT LEARNING FOR ADAPTIVE CRUISE CONTROL BY IMPOSING STATE-SPECIFIC SAFE SETS
British Library Conference Proceedings | 2021
Safe-braking control system, mine hoist and safe-braking control method
Europäisches Patentamt | 2019