This study compares Deep Reinforcement Learning (DRL) and Model Predictive Control (MPC) for Adaptive Cruise Control (ACC) design in car-following scenarios. A first-order system is used as the Control-Oriented Model (COM) to approximate the acceleration command dynamics of a vehicle. Based on the equations of the control system and the multi-objective cost function, we train a DRL policy using Deep Deterministic Policy Gradient (DDPG) and solve the MPC problem via Interior-Point Optimization (IPO). Simulation results for the episode costs show that, when there are no modeling errors and the testing inputs lie within the training data range, the DRL solution is equivalent to MPC with a sufficiently long prediction horizon. In particular, the DRL episode cost is only 5.8% higher than the benchmark optimal control solution obtained by optimizing over the entire episode via IPO. The DRL control performance degrades when the testing inputs fall outside the training data range, indicating inadequate machine learning generalization. When there are modeling errors due to control delay, disturbances, and/or testing with a High-Fidelity Model (HFM) of the vehicle, the DRL-trained policy outperforms MPC when the modeling errors are large and performs comparably when the modeling errors are small.
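For orientation, the following is a minimal sketch of the kind of setup the abstract describes: a discretized first-order COM in which the ego acceleration follows the command through a lag of the form tau * da/dt + a = u, paired with a quadratic multi-objective stage cost. The time constant, step size, state variables, and cost weights below are illustrative assumptions, not values taken from the paper.

# Hypothetical sketch, not the paper's implementation: a discretized
# first-order control-oriented model (COM) for ACC car-following plus a
# quadratic multi-objective stage cost. TAU, DT, and the weights are
# illustrative assumptions.

TAU = 0.5  # assumed lag time constant of the acceleration command [s]
DT = 0.1   # assumed discretization step [s]

def com_step(gap, rel_speed, accel, u, lead_accel=0.0):
    """One explicit-Euler step of the COM.
    gap: inter-vehicle distance [m]; rel_speed: lead minus ego speed [m/s];
    accel: ego acceleration [m/s^2]; u: acceleration command [m/s^2]."""
    gap_next = gap + DT * rel_speed
    rel_speed_next = rel_speed + DT * (lead_accel - accel)
    accel_next = accel + DT * (u - accel) / TAU  # first-order lag toward u
    return gap_next, rel_speed_next, accel_next

def stage_cost(gap_error, rel_speed, u, w_gap=1.0, w_v=0.1, w_u=0.01):
    """Illustrative quadratic stage cost: gap tracking, closing-speed
    damping, and control effort."""
    return w_gap * gap_error ** 2 + w_v * rel_speed ** 2 + w_u * u ** 2

# Example: one step from a 30 m gap closing at 2 m/s under a mild brake command.
gap, rel_speed, accel = com_step(30.0, -2.0, 0.0, u=-1.0)

Under a setup like this, DDPG learns a policy u = pi(state) that minimizes the accumulated stage cost, while MPC minimizes the same cost over a finite prediction horizon using the COM as its internal model.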
Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise Control
IEEE Transactions on Intelligent Vehicles, vol. 6, no. 2, pp. 221–231
1 June 2021
2,705,573 bytes
Article (Journal)
Electronic resource
English
British Library Conference Proceedings | 2019 | Multi-Objective Adaptive Cruise Control via Deep Reinforcement Learning
British Library Conference Proceedings | 2022 | Multi-Objective Adaptive Cruise Control via Deep Reinforcement Learning