Deep reinforcement learning (DRL) algorithms often struggle to achieve stability and efficiency owing to high policy-gradient variance and inaccurate reward estimation in complex scenarios. This study addresses these issues in multi-objective car-following control tasks with time lag in traffic oscillations. We propose an expert demonstration reinforcement learning (EDRL) approach that aims to stabilize training, accelerate learning, and enhance car-following performance. The key idea is to leverage expert demonstrations, which represent superior car-following control experiences, to improve the DRL policy. Our method involves two sequential steps. First, expert demonstrations are obtained during offline pretraining from prior traffic knowledge, including car-following trajectories in an empirical database and classic car-following models. Second, expert demonstrations are obtained during online training, where they are generated as the agent interacts with the car-following environment. The EDRL agents are trained through supervised regression on the expert demonstrations using the behavioral cloning technique. Experiments conducted in various traffic oscillation scenarios demonstrate that the proposed method significantly improves training stability, learning speed, and rewards compared to baseline algorithms.
Enhancing Car-Following Performance in Traffic Oscillations Using Expert Demonstration Reinforcement Learning
IEEE Transactions on Intelligent Transportation Systems; 25, 7; 7751-7766
2024-07-01
3,000,754 bytes
Journal article
Electronic resource
English
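
The record names behavioral cloning as the training mechanism but gives no implementation details, so the following is a minimal sketch of that supervised-regression step. The placeholder tensors `expert_states` (gap, relative speed, ego speed) and `expert_actions` (expert acceleration, assumed normalized to [-1, 1]) are hypothetical stand-ins for the demonstration data; the network shape and hyperparameters are illustrative, not the authors'.

```python
# Minimal behavioral-cloning sketch (not the authors' code): fit a policy
# network to expert car-following demonstrations by supervised regression.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical demonstration data: one row per time step.
expert_states = torch.randn(10_000, 3)   # placeholder: (gap, dv, v)
expert_actions = torch.randn(10_000, 1)  # placeholder: expert acceleration

policy = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Tanh(),  # bounded command, assuming actions in [-1, 1]
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(expert_states, expert_actions),
                    batch_size=256, shuffle=True)

for epoch in range(20):
    for states, actions in loader:
        # Behavioral cloning: regress the policy output onto expert actions.
        loss = nn.functional.mse_loss(policy(states), actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the approach the abstract describes, the same regression step would apply to demonstrations drawn first from empirical trajectories and classic car-following models (offline pretraining) and later from superior experiences collected as the agent interacts online.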