Intelligent transportation systems characterize a smart city in part by improving the efficiency of road networks through advanced traffic signal control. Recently, owing to significant progress in artificial intelligence, machine learning-based frameworks for adaptive traffic signal control have attracted considerable attention. In particular, the deep Q-learning neural network is a model-free technique that can be applied to optimal action selection problems. However, setting variable green times is a key mechanism for responding to traffic fluctuations, so time steps need not be fixed intervals in the reinforcement learning framework. In this study, the authors propose a dynamic discount factor embedded in the iterative Bellman equation to prevent a biased estimate of the action-value function caused by inconstant time step intervals. Moreover, the action is added to the input layer of the neural network during training, and the output layer gives the estimated action value for that action. The trained neural network is then used, as the agent's policy, to select the action with the optimal estimated value within a finite action set. Preliminary results show that the trained agent outperforms a fixed timing plan in all testing cases, reducing the system's total delay by 20%.
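The abstract does not give the exact form of the dynamic discount factor. A common semi-Markov-style choice, assumed here purely for illustration, is to raise the base discount factor to the power of the elapsed step duration, so that longer green phases are discounted more heavily in the Bellman update:

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ r_t + \gamma^{\Delta t_t} \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \big]

where \Delta t_t is the variable duration of step t and \gamma is the per-unit-time discount factor.

Likewise, a minimal sketch of the described network layout, with the candidate action concatenated to the state at the input layer and a single estimated action value at the output, might look as follows; layer sizes, names, and the gamma ** dt target shown in the comments are assumptions, not details taken from the paper:

    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        """Q(s, a) estimator: the action is part of the input; the output is one scalar value."""
        def __init__(self, state_dim, action_dim, hidden_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden_dim),  # state and action concatenated
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),                       # estimated action value
            )

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def greedy_action(q_net, state, candidate_actions):
        """Policy: within a finite action set, pick the action with the highest estimated value."""
        with torch.no_grad():
            values = torch.stack([q_net(state, a) for a in candidate_actions])
        return candidate_actions[int(values.argmax())]

    # Hypothetical training target with a step-length-dependent discount:
    # target = reward + (gamma ** dt) * max(q_net(next_state, a) for a in candidate_actions)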


