Abstract

Driver-preferred route planning often evaluates the quality of a planned route based on how closely it is followed by the driver. Despite decades of research in this area, there still exist non-negligible deviations from planned routes. Recently, with the prevalence of GPS data, Inverse Reinforcement Learning (IRL) has attracted much interest due to its ability to directly learn routing patterns from GPS trajectories. However, existing IRL methods are limited in that: (1) they rely on numerical approximations to calculate the expected state visitation frequencies (SVFs), which are inaccurate and time-consuming; and (2) they ignore the fact that the coverage of GPS trajectories is skewed toward popular road segments, causing difficulties in learning from sparsely covered ones. To overcome these challenges, we propose a recursive logit-based meta-IRL approach, where (1) we use the recursive logit model to capture drivers’ route choice behavior so that the expected SVFs can be analytically derived, which substantially reduces the computational effort; and (2) we introduce meta-parameters and employ meta-learning techniques so that the learning on sparsely covered road segments can benefit from that on popular ones. When training our IRL model, we update the rewards of road segments with the expected SVFs by solving several systems of linear equations and update the meta-parameters through a two-level optimization structure to ensure fast adaptation and versatility. We validate our approach using real GPS data in Chengdu, China. Results show that our planned routes better match actual routes compared with state-of-the-art methods including the recursive logit model, Deep-IRL and Dij-IRL: the F1-Score increases by 4.17% with the introduction of the recursive logit model and further increases to 5.19% after meta-learning is employed. Moreover, we can reduce training time by over 95%.
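For readers unfamiliar with why the recursive logit model yields the expected SVFs in closed form, the following is a minimal sketch of the standard recursive logit formulation (scale parameter omitted); the notation is illustrative and may differ from the paper's exact model.

$$
z_k = \sum_{a \in A(k)} e^{v(a \mid k)}\, z_a + \mathbb{1}\{k = d\}, \qquad z_k := e^{V_d(k)},
$$

so that $z = (I - M)^{-1} b$ with $M_{ka} = e^{v(a \mid k)}$ for feasible transitions and $b_k = \mathbb{1}\{k = d\}$. The link choice probabilities follow as

$$
P(a \mid k) = \frac{e^{v(a \mid k)}\, z_a}{z_k},
$$

and the expected state visitation frequencies $F$ solve a second linear system,

$$
F = P^{\top} F + G \quad\Longrightarrow\quad F = (I - P^{\top})^{-1} G,
$$

where $G$ encodes the trip origins (demand). Thus, computing the expected SVFs amounts to a few linear solves per destination, instead of the sampling or value-iteration approximations used by earlier IRL methods.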

Highlights

- We integrate the recursive logit model into inverse reinforcement learning.
- The accuracy of the SVF estimates increases significantly, with computation time reduced by over 95%.
- Meta-learning enables training with extremely limited data while still performing well.
- Our model outperforms state-of-the-art methods on an online ride-hailing dataset.
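To make the "two-level optimization structure" in the abstract concrete, a generic MAML-style meta-update over the reward parameters could take the form below; this is a hedged illustration of the general technique, not necessarily the authors' exact procedure, and the loss $\mathcal{L}_i$ stands for the IRL objective (e.g., the negative log-likelihood of observed trajectories) on a group $i$ of road segments.

$$
\theta_i' = \theta - \alpha \,\nabla_\theta \mathcal{L}_i^{\text{support}}(\theta) \qquad \text{(inner level: adapt to segment group } i\text{)},
$$

$$
\theta \leftarrow \theta - \beta \,\nabla_\theta \sum_i \mathcal{L}_i^{\text{query}}(\theta_i') \qquad \text{(outer level: update the meta-parameters)}.
$$

Under such a scheme, gradients from densely covered (popular) segment groups shape the meta-parameters $\theta$, which then serve as a strong initialization for few-shot adaptation on sparsely covered segments.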





Title:

    Recursive logit-based meta-inverse reinforcement learning for driver-preferred route planning


Contributors:
Zhang, Pujun (Author) / Lei, Dazhou (Author) / Liu, Shan (Author) / Jiang, Hai (Author)


Publication date:

    2024-02-29




Media type:

Article (Journal)


Format:

Electronic resource


Language:

English





Similar items:

A decomposition method for estimating recursive logit based route choice models

    Mai, Tien / Bastin, Fabian / Frejinger, Emma | Online Contents | 2018


    Modeling Driver Behavior using Adversarial Inverse Reinforcement Learning

    Sackmann, Moritz / Bey, Henrik / Hofmann, Ulrich et al. | IEEE | 2022



    Preferred Mode Choice Model for Commuter Purpose Based on Multinominal Logit Model

Han, Yan / Guan, Hong Zhi / Xue, Meng | Trans Tech Publications | 2011