Incident-induced congestion is one of the main causes of delays on motorways. Strategies for managing such congestion with traffic control technologies can be classified into model-based and model-free methods, each of which has its own merits and drawbacks. The Dyna-Q architecture combines model-free learning with model-based planning to obtain the benefits of both. Based on the Dyna-Q architecture, an indirect reinforcement learning (IRL) approach is derived in this study. The new method is compared with two other methods, namely DRL and ALINEA. Simulation results show that, with suitable weight values, IRL achieves superior performance in many scenarios. Moreover, compared with DRL, IRL learns much faster.
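
As a rough illustration of the Dyna-Q idea referenced in the abstract (not the paper's actual controller), a minimal tabular Dyna-Q loop in Python is sketched below. The environment interface (reset/step), the discrete action set, the hyperparameter values and the simplified terminal handling are all assumptions made for illustration only.

import random
from collections import defaultdict

# Minimal tabular Dyna-Q sketch: direct (model-free) Q-learning updates
# from real experience, plus planning sweeps over a learned model.
# Environment, actions and hyperparameters are illustrative assumptions.
ALPHA, GAMMA, EPSILON, PLANNING_STEPS = 0.1, 0.95, 0.1, 10

def dyna_q(env, actions, episodes=500):
    Q = defaultdict(float)   # Q[(state, action)] -> estimated return
    model = {}               # model[(state, action)] -> (reward, next_state)

    def greedy(state):
        return max(actions, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            action = random.choice(actions) if random.random() < EPSILON else greedy(state)
            next_state, reward, done = env.step(action)

            # direct RL: one-step Q-learning update from the real transition
            target = reward + (0.0 if done else GAMMA * max(Q[(next_state, a)] for a in actions))
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])

            # model learning: remember the last observed outcome of (state, action)
            model[(state, action)] = (reward, next_state)

            # planning: replay simulated transitions sampled from the model
            # (terminal transitions are treated as non-terminal here for brevity)
            for _ in range(PLANNING_STEPS):
                (s, a), (r, s2) = random.choice(list(model.items()))
                t = r + GAMMA * max(Q[(s2, b)] for b in actions)
                Q[(s, a)] += ALPHA * (t - Q[(s, a)])

            state = next_state
    return Q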


    Title:

    An indirect reinforcement learning approach for ramp control under incident-induced congestion


    Contributors:


    Publication date:

    2013-10-01


    Size:

    484679 bytes




    Type of media:

    Conference paper


    Type of material:

    Electronic Resource


    Language:

    English




    Similar titles:

    Optimal ramp control for incident response

    Shaw, Leonard / McShane, William R. | Elsevier | 1972



    Ramp Metering for Congestion Relief

    Waters, M. G. / Institute of Transportation Engineers | British Library Conference Proceedings | 2007


    Reinforcement Learning for Ramp Control: An Analysis of Learning Parameters

    Chao Lu / Jie Huang / Jianwei Gong | DOAJ | 2016