Driven by advances in artificial intelligence, deep reinforcement learning (DRL) has made remarkable strides in adaptive traffic signal control (ATSC), enabling better handling of fluctuating traffic volumes and congestion. However, in most existing studies, trained agents transfer poorly to scenarios with varying vehicle turning ratios, and the switching rules for the signal stage sequence do not align with actual traffic demands. To address these issues, this paper presents action-masking-based proximal policy optimization with a dual-ring phase structure (AMPPO-DR), a novel DRL-based ATSC model that simultaneously optimizes the stage sequence and stage durations. Specifically, we account for the correlation between states and actions and use intersection channelization to predict vehicle turning directions. Moreover, we define the action as the selection of the next green stage and establish variable-stage-sequence constraint rules based on the dual-ring phase structure. To satisfy the stage-sequence constraints, we propose the AMPPO algorithm, which dynamically adjusts the policy network outputs to mask invalid stages in real time. Simulation experiments demonstrate that the proposed method adapts effectively to changing turning flows, enables flexible and rational stage switching, and ultimately improves traffic efficiency.
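A minimal sketch of the action-masking idea the abstract describes, assuming a PyTorch categorical policy over candidate green stages; the network sizes, stage count, and mask-construction values below are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the paper's code): invalid next-stage choices are
# masked out of the policy's categorical distribution before sampling, so the
# agent can never select a stage that violates the dual-ring constraint rules.
import torch
import torch.nn as nn

NUM_STAGES = 8  # assumed number of candidate green stages in the dual-ring structure


class MaskedStagePolicy(nn.Module):
    def __init__(self, state_dim: int, num_stages: int = NUM_STAGES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, num_stages),
        )

    def forward(self, state: torch.Tensor, valid_mask: torch.Tensor):
        """state: (batch, state_dim); valid_mask: (batch, num_stages) boolean,
        True where the constraint rules allow that stage to follow next."""
        logits = self.net(state)
        # Drive invalid-stage logits to -inf so their probability is zero and
        # no gradient pushes the policy toward constraint-violating actions.
        masked_logits = logits.masked_fill(~valid_mask, float("-inf"))
        return torch.distributions.Categorical(logits=masked_logits)


if __name__ == "__main__":
    policy = MaskedStagePolicy(state_dim=16)
    state = torch.randn(1, 16)
    # Example mask: suppose only stages 0, 2, and 5 are reachable from the
    # current stage under the sequence constraints (assumed values).
    mask = torch.zeros(1, NUM_STAGES, dtype=torch.bool)
    mask[0, [0, 2, 5]] = True
    dist = policy(state, mask)
    action = dist.sample()            # always a valid stage index
    log_prob = dist.log_prob(action)  # fed into the PPO clipped objective
    print(action.item(), log_prob.item())

In a PPO setting, the same mask would be applied when recomputing log-probabilities during the update step, so old and new policies are compared over the same valid-action set.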
Action Masking-Based Proximal Policy Optimization With the Dual-Ring Phase Structure for Adaptive Traffic Signal Control
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 2, pp. 2422-2433
2025-02-01
Article (Journal)
Electronic Resource
English
Improving traffic signal control operations using proximal policy optimization
Wiley | 2023
DOAJ | 2023