Abstract: We propose two classes of algorithms for achieving user equilibrium in simulation-based dynamic traffic assignment (DTA), with particular attention to the interaction between travel information and route choice behavior. Each driver is assumed to make a day-to-day route choice repeatedly and to experience payoffs subject to unknown noise. Because these payoffs are initially unknown and observed only noisily, the driver must estimate them over time and adapt his/her actions accordingly. To solve this problem, we develop a multi-agent version of Q-learning that estimates the payoff functions using novel forms of the ε-greedy learning policy. We apply this Q-learning scheme to simulation-based DTA, in which the traffic flows and route travel times in the network are generated by a microscopic traffic simulator based on a cellular automaton. Finally, we present simulation examples that demonstrate the convergence of our algorithms to a Nash equilibrium and the effectiveness of best-route provision services.
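The following is a minimal sketch of the kind of day-to-day ε-greedy Q-learning route choice the abstract describes. All names and parameters here (N_DRIVERS, ALPHA, EPSILON, the congestion function and its free-flow times and capacities) are illustrative assumptions, not values from the paper; in the paper the travel times come from a cellular-automaton microsimulator, and the ε-greedy policy takes more elaborate forms than the constant exploration rate used below.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DRIVERS, N_ROUTES, N_DAYS = 100, 3, 500
ALPHA, EPSILON = 0.1, 0.1                   # assumed constant learning/exploration rates
FREE_FLOW = np.array([10.0, 12.0, 15.0])    # hypothetical free-flow travel times
CAPACITY = np.array([40.0, 35.0, 30.0])     # hypothetical route capacities

# Each driver keeps a Q-value (estimated payoff = negative travel time) per route.
Q = np.zeros((N_DRIVERS, N_ROUTES))

def simulate_travel_times(flows):
    """Placeholder congestion model with observation noise
    (stands in for the cellular-automaton microsimulator)."""
    times = FREE_FLOW * (1.0 + (flows / CAPACITY) ** 2)
    return times + rng.normal(0.0, 0.5, size=N_ROUTES)

for day in range(N_DAYS):
    # Epsilon-greedy day-to-day route choice for every driver.
    greedy = Q.argmax(axis=1)
    explore = rng.random(N_DRIVERS) < EPSILON
    choices = np.where(explore, rng.integers(0, N_ROUTES, N_DRIVERS), greedy)

    # Route flows determine (noisy) travel times for that day.
    flows = np.bincount(choices, minlength=N_ROUTES)
    times = simulate_travel_times(flows)

    # Each driver observes only the payoff of the route actually chosen
    # and updates that route's estimate.
    payoff = -times[choices]
    idx = np.arange(N_DRIVERS)
    Q[idx, choices] += ALPHA * (payoff - Q[idx, choices])

print("final greedy route flows:", np.bincount(Q.argmax(axis=1), minlength=N_ROUTES))
```

Under this kind of scheme, the greedy route flows tend to stabilize at a flow pattern where no driver's estimated payoff can be improved by switching routes, which is the user-equilibrium (Nash) condition the paper studies.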


Title: Adaptive Learning Algorithms for Simulation-Based Dynamic Traffic User Equilibrium

Contributors:

Publication date: 2018-01-10

Size: 12 pages

Type of media: Article (Journal)

Type of material: Electronic Resource

Language: English




Similar titles:

Dynamic equilibrium assignment with microscopic traffic simulation
Liu, H.X. / Ma, W. / Ban, J.X. et al. | IEEE | 2005

Heuristic algorithms for simulation-based dynamic traffic assignment
Tong, C.O. / Wong, S.C. | Taylor & Francis Verlag | 2010

Dynamic Equilibrium Assignment with Microscopic Traffic Simulation
Liu, H.X. / Ma, W. / Ban, J.X. et al. | British Library Conference Proceedings | 2005

Dynamic Traffic Equilibrium
Ramadurai, Gitakrishnan / Ukkusuri, Satish V. | Transportation Research Record | 2007