Driving behavior modeling is of great importance for designing safe, smart, and personalized autonomous driving systems. In this paper, we employ a driving model based on an internal reward function that emulates the human decision-making mechanism. To infer the reward function parameters from naturalistic human driving data, we propose a structural assumption about human driving behavior that focuses on discrete latent driving intentions. This assumption converts the continuous behavior modeling problem into a discrete setting and thus makes it tractable to learn reward functions with maximum entropy inverse reinforcement learning (IRL). Specifically, a polynomial trajectory sampler generates candidate trajectories that reflect high-level intentions and approximates the partition function in the maximum entropy IRL framework, and an environment model that captures the interactive behaviors between the ego vehicle and surrounding vehicles is built to better evaluate the generated trajectories. The proposed method is applied to learn personalized reward functions for individual human drivers from the NGSIM highway driving dataset. The qualitative results demonstrate that the learned reward functions explicitly express the preferences of different drivers and interpret their decisions. The quantitative results reveal that the learned reward functions are robust: proximity to the human driving trajectories declines only marginally when the reward functions are applied under testing conditions. In testing, the personalized modeling method outperforms the general modeling approach, significantly reducing the modeling errors in human likeness (a custom metric gauging accuracy), and both methods outperform the other baseline methods. Moreover, we find that predicting the response actions of surrounding vehicles, including their potential decelerations induced by the ego vehicle, is critical for evaluating the generated trajectories, and that the accuracy of personalized planning with the learned reward functions depends on the accuracy of the forecasting model.
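The abstract describes a concrete computational recipe: sample polynomial candidate trajectories, one per discrete latent intention, then use them as a finite approximation of the partition function in maximum entropy IRL. A minimal sketch of that recipe follows, assuming a reward that is linear in handcrafted features; the quintic polynomial boundary conditions, the feature definitions, and all names and parameters are illustrative assumptions, not the authors' implementation, and the interactive environment model is omitted here.

```python
# Minimal sketch: polynomial trajectory sampling + MaxEnt IRL with a
# sampled partition function. All parameter choices are illustrative.
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Solve for quintic coefficients (highest power first) from boundary conditions."""
    A = np.array([
        [0.0,     0.0,      0.0,    0.0,  0.0, 1.0],  # x(0)
        [0.0,     0.0,      0.0,    0.0,  1.0, 0.0],  # x'(0)
        [0.0,     0.0,      0.0,    2.0,  0.0, 0.0],  # x''(0)
        [T**5,    T**4,     T**3,   T**2, T,   1.0],  # x(T)
        [5*T**4,  4*T**3,   3*T**2, 2*T,  1.0, 0.0],  # x'(T)
        [20*T**3, 12*T**2,  6*T,    2.0,  0.0, 0.0],  # x''(T)
    ])
    return np.linalg.solve(A, np.array([x0, v0, a0, xT, vT, aT]))

def sample_candidates(state, end_speeds, end_offsets, T=5.0, n_steps=50):
    """One candidate trajectory per discrete intention
    (terminal speed x terminal lateral offset)."""
    t = np.linspace(0.0, T, n_steps)
    candidates = []
    for vT in end_speeds:        # e.g. brake / keep speed / accelerate
        for dT in end_offsets:   # e.g. change left / keep lane / change right
            sT = state["s"] + 0.5 * (state["v"] + vT) * T  # rough terminal position
            cs = quintic_coeffs(state["s"], state["v"], state["a"], sT, vT, 0.0, T)
            cd = quintic_coeffs(state["d"], 0.0, 0.0, dT, 0.0, 0.0, T)
            candidates.append(np.stack([np.polyval(cs, t), np.polyval(cd, t)], axis=1))
    return candidates

def features(traj):
    """Illustrative features: speed proxy, longitudinal jerkiness, lateral movement."""
    ds, dd = np.diff(traj[:, 0]), np.diff(traj[:, 1])
    return np.array([ds.mean(), np.abs(np.diff(ds)).mean(), np.abs(dd).mean()])

def maxent_irl_step(theta, expert_traj, candidates, lr=0.01):
    """One gradient-ascent step on the MaxEnt IRL log-likelihood, with the
    partition function Z = sum_i exp(theta . f_i) approximated over the
    sampled candidate set instead of all continuous trajectories."""
    F = np.array([features(tr) for tr in candidates])
    logits = F @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()                                  # softmax over candidates
    grad = features(expert_traj) - p @ F          # expert minus expected features
    return theta + lr * grad

# Demo: fit theta so that one "expert-like" candidate becomes most probable.
state = {"s": 0.0, "v": 15.0, "a": 0.0, "d": 0.0}        # s/d: Frenet coordinates (m)
cands = sample_candidates(state, end_speeds=[12.0, 15.0, 18.0],
                          end_offsets=[-3.7, 0.0, 3.7])  # 3.7 m ~ one lane width
theta = np.zeros(3)
for _ in range(200):
    theta = maxent_irl_step(theta, cands[4], cands)      # cands[4]: keep speed and lane
```

The softmax over sampled trajectory features stands in for the intractable integral over all continuous trajectories; refining the candidate set (i.e., the intention grid) trades computation for a tighter approximation of the partition function.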


    Title:

    Driving Behavior Modeling Using Naturalistic Human Driving Data With Inverse Reinforcement Learning


    Contributors:
    Huang, Zhiyu (author) / Wu, Jingda (author) / Lv, Chen (author)

    Published in:

    Publication date:

    2022-08-01


    Format / Extent:

    2389348 bytes




    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Driving Style Clustering using Naturalistic Driving Data

    Chen, Kuan-Ting / Chen, Huei-Yen Winnie | Transportation Research Record | 2019


    Learning From Naturalistic Driving Data for Human-Like Autonomous Highway Driving

    Xu, Donghao / Ding, Zhezhang / He, Xu et al. | IEEE | 2021


    Studying Driving Behavior on Horizontal Curves using Naturalistic Driving Study Data

    Dhahir, Bashar / Hassan, Yasser | Transportation Research Record | 2018


    Incorporating Driving Behavior Metrics Derived from Naturalistic Driving Data into Macroscopic Safety Modeling

    Medina, Juan C. / Saleem, Taha / Lan, Bo | Transportation Research Record | 2024


    Driving Maneuvers Analysis Using Naturalistic Highway Driving Data

    Li, Guofa / Li, Shengbo Eben / Jia, Lijuan et al. | IEEE | 2015