Deep Reinforcement Learning (DRL) has achieved great success in traffic signal control. Most DRL-based methods treat individual intersections as agents that cooperate in a decentralized way, which raises two main issues: cooperation and stability. To overcome these issues, we propose a novel centralized control method in which a single global agent controls the whole network. To mitigate the curse of dimensionality, we use three techniques: first, a decomposition mechanism is proposed to decompose the high-dimensional state-action space; second, an action-feedback technique is introduced to learn temporal patterns from historical decisions and thereby improve decision-making; third, a GAT model is applied to learn the spatial features of surrounding intersections to effectively estimate future rewards. With these three techniques, our model scales readily to large traffic networks. We conduct extensive experiments on both synthetic and real-world data, and the results demonstrate that our model outperforms traditional and state-of-the-art DRL-based control methods.
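To illustrate the GAT-based spatial component described in the abstract, the following is a minimal single-head graph-attention sketch in PyTorch showing how a GAT layer could aggregate the states of neighbouring intersections into one spatial feature per intersection. The layer sizes, the adjacency format, the single-head simplification, and the example state dimensions are assumptions for illustration only, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class GATLayer(nn.Module):
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared feature transform
            self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer
            self.leaky_relu = nn.LeakyReLU(0.2)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x:   [N, in_dim]  per-intersection state (e.g. queue lengths, current phase)
            # adj: [N, N]       1 where intersections share a road link (self-loops included), else 0
            h = self.W(x)                                      # [N, out_dim]
            N = h.size(0)
            # Pairwise attention logits e_ij = LeakyReLU(a([h_i || h_j]))
            h_i = h.unsqueeze(1).expand(N, N, -1)
            h_j = h.unsqueeze(0).expand(N, N, -1)
            e = self.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1))).squeeze(-1)
            # Mask out non-neighbours before normalising the attention weights
            e = e.masked_fill(adj == 0, float("-inf"))
            alpha = torch.softmax(e, dim=-1)                   # [N, N]
            return alpha @ h                                   # aggregated spatial feature per intersection

    # Example: 4 intersections on a line, 8-dimensional local state each (hypothetical sizes).
    adj = torch.tensor([[1, 1, 0, 0],
                        [1, 1, 1, 0],
                        [0, 1, 1, 1],
                        [0, 0, 1, 1]], dtype=torch.float32)
    states = torch.randn(4, 8)
    spatial = GATLayer(8, 16)(states, adj)   # [4, 16] features usable for reward estimation

In the paper's setting, such aggregated features would feed the global agent's value estimation; the self-loops in the adjacency matrix keep each intersection's own state in its aggregated feature.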


    Title:

    A Spatial-Temporal Deep Reinforcement Learning Model for Large-Scale Centralized Traffic Signal Control


    Contributors:
    Yi, Chenglin (Author) / Wu, Jia (Author) / Ren, Yanyu (Author) / Ran, Yunchuan (Author) / Lou, Yican (Author)


    Publication date:

    08.10.2022


    Format / Extent:

    746977 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English