Incident-induced congestion is one of the main causes of delay on motorways. Strategies for managing such congestion with traffic control technologies can be classified into model-based and model-free methods, each with its own merits and drawbacks. The Dyna-Q architecture combines model-free learning with model-based planning to obtain the benefits of both. Based on the Dyna-Q architecture, an indirect reinforcement learning (IRL) approach is derived in this study. The new method is compared with two other methods, DRL and ALINEA. Simulation results show that, with suitable weight values, IRL achieves superior performance in many scenarios. Moreover, compared with DRL, IRL learns much faster.
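
For illustration only, the sketch below shows a minimal tabular Dyna-Q agent in Python: each real transition drives a direct Q-learning update (the model-free side) and is stored in a learned one-step model, which is then replayed for a number of simulated planning updates (the model-based side). This is not the authors' IRL ramp controller; the class name, parameters (alpha, gamma, epsilon, planning_steps) and the usage interface are assumptions.

```python
import random
from collections import defaultdict

class DynaQ:
    """Minimal tabular Dyna-Q sketch (illustrative, not the paper's controller)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, planning_steps=10):
        self.q = defaultdict(float)   # Q(s, a) value table
        self.model = {}               # learned model: (s, a) -> (reward, next state)
        self.actions = actions
        self.alpha = alpha
        self.gamma = gamma
        self.epsilon = epsilon
        self.planning_steps = planning_steps

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        # Direct (model-free) update from real experience.
        self._update(s, a, r, s_next)
        # Store the transition in the model, then plan with simulated experience.
        self.model[(s, a)] = (r, s_next)
        for _ in range(self.planning_steps):
            (ps, pa), (pr, ps_next) = random.choice(list(self.model.items()))
            self._update(ps, pa, pr, ps_next)

    def _update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

# Hypothetical usage with an abstract environment:
#   agent = DynaQ(actions=[0, 1, 2])
#   a = agent.act(state)
#   agent.learn(state, a, reward, next_state)
```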


    Title:

    An indirect reinforcement learning approach for ramp control under incident-induced congestion


    Contributors:
    Lu, Chao (author) / Chen, Haibo (author) / Grant-Muller, Susan (author)


    Date of publication:

    2013-10-01


    Format / extent:

    484679 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Optimal ramp control for incident response

    Shaw, Leonard / McShane, William R. | Elsevier | 1972



    Ramp Metering for Congestion Relief

    Waters, M. G. / Institute of Transportation Engineers | British Library Conference Proceedings | 2007


    Reinforcement Learning for Ramp Control: An Analysis of Learning Parameters

    Chao Lu / Jie Huang / Jianwei Gong | DOAJ | 2016

    Free access