This paper proposes a new approach that uses reinforcement learning (RL) to train an agent to follow a preceding vehicle with human driving characteristics. Drawing on the idea of inverse reinforcement learning, we design the reward function of the RL model: the factors to be weighed in vehicle following are vectorized into a reward vector, and the reward function is defined as the inner product of the reward vector and a weight vector. Driving data from human drivers were collected and analyzed to obtain the true reward function. Because the state and action spaces are continuous, the RL model was trained with the deterministic policy gradient algorithm. We adjusted the weight vector of the reward function so that the value vector of the RL model continuously approached that of a human driver. After dozens of training rounds, we selected the policy whose value vector was nearest to that of a human driver and tested it in the PanoSim simulation environment. The results show the desired performance: the agent follows the preceding vehicle safely and smoothly.
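
The reward construction described in the abstract (a vectorized reward combined with a weight vector via an inner product, tuned until the policy's value vector approaches a human driver's) can be sketched as below. This is a minimal illustration, not the authors' code: only the abstract is available in this record, so the concrete features, the desired gap, the update rule in adjust_weights, and the learning rate are all assumptions.

```python
import numpy as np

# Illustrative sketch: the factors weighed in vehicle following are
# vectorized into a reward vector, and the reward is the inner product
# of that vector with a weight vector. Features and constants below are
# hypothetical; the paper's actual choices are not given in this record.

DESIRED_GAP_M = 25.0  # assumed nominal following gap, illustration only

def reward_vector(gap_m, rel_speed_mps, accel_mps2):
    """Hypothetical feature (reward) vector for car following."""
    return np.array([
        -abs(gap_m - DESIRED_GAP_M),  # gap keeping (safety/efficiency)
        -abs(rel_speed_mps),          # speed matching with the lead vehicle
        -accel_mps2 ** 2,             # comfort: penalize harsh acceleration
    ])

def reward(gap_m, rel_speed_mps, accel_mps2, weights):
    """Reward = inner product of the reward vector and the weight vector."""
    return float(np.dot(weights, reward_vector(gap_m, rel_speed_mps, accel_mps2)))

def adjust_weights(weights, policy_value_vec, human_value_vec, lr=0.05):
    """Nudge the weights so the trained policy's value vector moves toward
    the human driver's value vector estimated from collected driving data."""
    return weights + lr * (human_value_vec - policy_value_vec)
```

In the paper's setup, each weight adjustment would presumably be followed by retraining the deterministic-policy-gradient agent under the updated reward and re-estimating its value vector; after dozens of such rounds, the policy whose value vector is nearest to the human driver's is selected.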


Title: Preceding vehicle following algorithm with human driving characteristics
Contributors: Pan, Feng (author) / Bao, Hong (author)
Publication date: 2021-06-01
Size: 10 pages
Type of media: Article (Journal)
Type of material: Electronic Resource
Language: English




    PRECEDING VEHICLE FOLLOWING TRAVEL CONTROL DEVICE AND METHOD OF CONTROLLING PRECEDING VEHICLE FOLLOWING TRAVEL

    SATAKE TOSHIHIDE / SHIMIZU YUJI / KAKUTA TAKATOSHI | European Patent Office | 2017


    DRIVERS' CAR-FOLLOWING CORRELATIVE BEHAVIOR WITH PRECEDING VEHICLES IN MULTILANE DRIVING

    Yu, C. / Wang, J. / Institute of Electrical and Electronics Engineers | British Library Conference Proceedings | 2014