Most current studies on autonomous vehicle control via deep reinforcement learning (DRL) use point-mass kinematic models, neglecting vehicle dynamics such as acceleration delay and acceleration command dynamics. The acceleration delay, caused by sensing and actuation latencies, delays the execution of control inputs. The acceleration command dynamics mean that the actual vehicle acceleration does not reach the commanded acceleration instantaneously. In this work, we investigate the feasibility of applying DRL controllers trained with vehicle kinematic models to more realistic driving control with vehicle dynamics. We consider a particular longitudinal car-following control problem, namely Adaptive Cruise Control (ACC), solved via DRL using a point-mass kinematic model. When such a controller is applied to car following with vehicle dynamics, we observe significantly degraded car-following performance. We therefore redesign the DRL framework to accommodate the acceleration delay and the acceleration command dynamics by adding, respectively, the delayed control inputs and the actual vehicle acceleration to the reinforcement learning environment state. The training results show that the redesigned DRL controller achieves near-optimal car-following performance with vehicle dynamics, when compared against dynamic programming solutions.
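
As a rough illustration of the state augmentation described in the abstract, below is a minimal Python sketch of a car-following environment with a fixed-step actuation delay and first-order acceleration command dynamics. It is not the authors' implementation; all names and parameter values (CarFollowingEnv, delay_steps, tau, the gap-based reward) are assumptions made for illustration only.

import numpy as np
from collections import deque

class CarFollowingEnv:
    """Sketch of a longitudinal car-following environment whose observation is
    augmented with the buffered (delayed) commands and the actual acceleration."""

    def __init__(self, dt=0.1, delay_steps=3, tau=0.5):
        self.dt = dt                    # simulation step [s]
        self.delay_steps = delay_steps  # assumed actuation delay, in steps
        self.tau = tau                  # assumed first-order lag time constant [s]
        self.reset()

    def reset(self):
        self.gap = 30.0        # inter-vehicle gap [m]
        self.rel_speed = 0.0   # lead speed minus ego speed [m/s]
        self.ego_accel = 0.0   # actual ego acceleration [m/s^2]
        self.cmd_buffer = deque([0.0] * self.delay_steps, maxlen=self.delay_steps)
        return self._state()

    def _state(self):
        # Kinematic quantities plus the augmentation: the delayed control
        # inputs and the actual vehicle acceleration.
        return np.array([self.gap, self.rel_speed, self.ego_accel, *self.cmd_buffer])

    def step(self, accel_cmd, lead_accel=0.0):
        # Acceleration delay: the command issued now only reaches the
        # actuator after delay_steps simulation steps.
        delayed_cmd = self.cmd_buffer[0]
        self.cmd_buffer.append(accel_cmd)
        # Acceleration command dynamics: the actual acceleration approaches
        # the (delayed) command with a first-order lag of time constant tau.
        self.ego_accel += self.dt / self.tau * (delayed_cmd - self.ego_accel)
        # Point-mass kinematic update of relative speed and gap.
        self.rel_speed += (lead_accel - self.ego_accel) * self.dt
        self.gap += self.rel_speed * self.dt
        # Illustrative reward: penalize deviation from a 30 m desired gap.
        reward = -abs(self.gap - 30.0)
        return self._state(), reward, False, {}

In a purely kinematic formulation the observation would contain only the gap and relative speed; here the extra entries give the agent the information needed to compensate for the delay and lag, which is the redesign the abstract describes.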


    Title:
    Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning

    Contributors:
    Lin, Yuan (author) / McPhee, John (author) / Azad, Nasser L. (author)

    Publication date:
    2019-10-01

    Size:
    485199 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English




    Dynamic Car-following Model Calibration with Deep Reinforcement Learning

    Naing, Htet / Cai, Wentong / Wu, Tiantian et al. | IEEE | 2022


    Proactive Car-Following Using Deep-Reinforcement Learning

    Yen, Yi-Tung / Chou, Jyun-Jhe / Shih, Chi-Sheng et al. | IEEE | 2020


    ADAPTIVE LONGITUDINAL CONTROL USING REINFORCEMENT LEARNING

    PATHAK SHASHANK / NADKARNI VIJAY JAYANT / BAG SUVAM | European Patent Office | 2019

    Free access
