Adaptive traffic signal control is a promising technique for alleviating traffic congestion. Reinforcement Learning (RL) has the potential to tackle the optimal traffic control problem for a single agent. However, the ultimate goal is integrated traffic control across multiple intersections, which can be achieved efficiently with decentralized controllers. Multi-Agent Reinforcement Learning (MARL) extends RL techniques to decentralized multiple agents operating in non-stationary environments. Most studies in the field of traffic signal control assume a stationary environment, an approach whose shortcomings are highlighted in this paper. A Q-Learning-based acyclic signal control system that uses a variable phasing sequence is developed. To investigate the appropriate state model for different traffic conditions, three models were developed, each with a different state representation. The models were tested on a typical multiphase intersection to minimize vehicle delay and were compared to the pre-timed control strategy as a benchmark. The Q-Learning control system consistently outperformed the widely used Webster pre-timed optimized signal control strategy under various traffic conditions.
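As a rough illustration of the control scheme the abstract describes, the sketch below shows a tabular Q-Learning update with an epsilon-greedy, acyclic (variable-sequence) phase choice. The state encoding, reward signal (negative vehicle delay), and parameter values are assumptions made for illustration, not the paper's exact formulation.

    # Minimal sketch of tabular Q-Learning for acyclic signal control.
    # State encoding, reward, and parameters are illustrative assumptions.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    PHASES = [0, 1, 2, 3]                    # candidate phases at a multiphase intersection

    Q = defaultdict(float)                   # Q[(state, phase)] -> estimated return

    def choose_phase(state):
        """Epsilon-greedy choice of the next phase (acyclic: any phase may follow)."""
        if random.random() < EPSILON:
            return random.choice(PHASES)
        return max(PHASES, key=lambda a: Q[(state, a)])

    def update(state, phase, reward, next_state):
        """One-step Q-Learning update; reward could be, e.g., negative vehicle delay."""
        best_next = max(Q[(next_state, a)] for a in PHASES)
        Q[(state, phase)] += ALPHA * (reward + GAMMA * best_next - Q[(state, phase)])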
An agent-based learning towards decentralized and coordinated traffic signal control
01.09.2010
729,696 bytes
Article (Conference)
Electronic resource
English
Dynamic traffic control: decentralized and coordinated methods | Tema Archiv | 1997
Transition scheme of traffic signal coordinated control | British Library Conference Proceedings | 2022