This paper proposes an adversarial reinforcement learning (RL)-based traffic control strategy to improve the traffic efficiency of an integrated network of an expressway and its adjacent surface streets. The proposed approach integrates adversarial learning into a multi-agent RL model, the multi-agent advantage actor-critic (MA2C), to enhance the generalization of the control strategy against the mismatch between the offline-training environment and the real traffic process. In the adversarial RL, the RL model is trained to maximize network throughput, while the adversarial network, which injects disturbances into the observed traffic states, is trained with the opposite reward of the RL model. The proposed control strategy is tested in the microscopic traffic simulation software SUMO. To reproduce the difference between the offline-training environment and the real traffic process, two different traffic models in SUMO are used for offline training and online testing, respectively. Simulation results demonstrate that the proposed approach reduces total time spent more effectively than the original MA2C approach, as well as a conventional feedback-based controller.
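To make the adversarial training idea in the abstract concrete, the following is a minimal single-agent PyTorch sketch, not the authors' MA2C implementation: a protagonist actor-critic is updated on the task reward under perturbed observations, while an adversary network produces bounded disturbances to the observed state and is updated on the opposite reward. The dimensions (OBS_DIM, N_ACTIONS), the disturbance bound EPS, the network sizes, and the dummy rollout data are all illustrative assumptions.

```python
# Minimal sketch of adversarial RL on observations (assumed single-agent form;
# the paper's method is multi-agent and interacts with SUMO).
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, EPS = 8, 4, 0.1   # EPS bounds the adversarial disturbance (assumed)

class ActorCritic(nn.Module):
    """Protagonist: advantage actor-critic over discrete signal-control actions."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh())
        self.pi = nn.Linear(64, N_ACTIONS)   # policy logits
        self.v = nn.Linear(64, 1)            # state-value estimate

    def forward(self, obs):
        h = self.body(obs)
        return self.pi(h), self.v(h).squeeze(-1)

class Adversary(nn.Module):
    """Produces a bounded disturbance added to the observed traffic state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, OBS_DIM), nn.Tanh())

    def forward(self, obs):
        return obs + EPS * self.net(obs)     # perturbed observation fed to the protagonist

protagonist, adversary = ActorCritic(), Adversary()
opt_p = torch.optim.Adam(protagonist.parameters(), lr=3e-4)
opt_a = torch.optim.Adam(adversary.parameters(), lr=3e-4)

# One illustrative update on dummy rollout data; a real loop would collect
# transitions from the simulator and alternate the two updates.
obs = torch.randn(32, OBS_DIM)
actions = torch.randint(N_ACTIONS, (32,))
returns = torch.randn(32)                    # stand-in for discounted throughput reward

# Protagonist update: maximize task reward under perturbed (detached) observations.
logits, values = protagonist(adversary(obs).detach())
dist = torch.distributions.Categorical(logits=logits)
adv = returns - values.detach()
loss_p = -(dist.log_prob(actions) * adv).mean() + 0.5 * (returns - values).pow(2).mean()
opt_p.zero_grad(); loss_p.backward(); opt_p.step()

# Adversary update: opposite reward, so the policy objective's sign is flipped;
# gradients flow through the perturbation, but only the adversary is stepped.
logits, values = protagonist(adversary(obs))
dist = torch.distributions.Categorical(logits=logits)
adv = returns - values.detach()
loss_a = (dist.log_prob(actions) * adv).mean()
opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```

In this zero-sum formulation the adversary's loss is simply the negation of the protagonist's policy objective, which mirrors the abstract's description of training the disturbance network on the opposite reward.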
Coordinated Control of Urban Expressway Integrating Adjacent Signalized Intersections Using Adversarial Network Based Reinforcement Learning Method
IEEE Transactions on Intelligent Transportation Systems; 25(2); 1857-1871
2024-02-01
5496040 bytes
Article (Journal)
Electronic Resource
English
Physics-Informed Particle-Based Reinforcement Learning for Autonomy in Signalized Intersections
Springer Verlag | 2024
Eco-driving at signalized intersections: a parameterized reinforcement learning approach
Taylor & Francis Verlag | 2023