To address the growing complexity of urban traffic congestion and its environmental impacts, this study applies the Gaussian plume model to evaluate how effectively different reinforcement learning algorithms reduce carbon dioxide emissions within a traffic signal control framework. The work combines this established atmospheric dispersion tool with contemporary reinforcement learning strategies, specifically Independent Proximal Policy Optimization (IPPO), Independent Deep Q-Network (IDQN), and MPLight, a novel pairing of methodologies. By quantitatively simulating and analyzing the dispersion dynamics of carbon dioxide under the resulting traffic signal control scenarios, the study demonstrates how the Gaussian plume model can be used to assess the environmental impact of signal control and offers guidance on selecting and optimizing signal control algorithms for improved urban environmental sustainability.
Reinforcement Learning Based Urban Traffic Signal Control and Its Impact Assessment on Environmental Pollution
2024
Article (Journal)
Electronic Resource
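The abstract names the Gaussian plume model as the tool that maps signal-control-dependent CO2 emissions onto near-road concentrations. As a point of reference only, the sketch below implements the standard steady-state Gaussian plume formula for a single point source with ground reflection; the function name, the emission rate Q, wind speed u, effective release height H, and the simple power-law dispersion coefficients are illustrative assumptions, not values or code from the study.

```python
import numpy as np

def gaussian_plume_concentration(x, y, z, Q, u, H, a_y=0.08, a_z=0.06):
    """Steady-state Gaussian plume concentration (g/m^3) at receptor (x, y, z).

    x: downwind distance (m), y: crosswind offset (m), z: receptor height (m)
    Q: emission rate (g/s), u: mean wind speed (m/s), H: effective source height (m)
    a_y, a_z: illustrative coefficients for a simple power-law spread
    (sigma ~ a * x^0.9); a real study would use stability-class dispersion curves.
    """
    x = np.asarray(x, dtype=float)
    sigma_y = a_y * np.power(np.maximum(x, 1.0), 0.9)   # lateral spread (m)
    sigma_z = a_z * np.power(np.maximum(x, 1.0), 0.9)   # vertical spread (m)

    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # Vertical term adds an image source at -H to model reflection at the ground.
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical


# Example: hypothetical idling-queue CO2 source at an intersection,
# evaluated 50-500 m downwind at breathing height (1.5 m).
if __name__ == "__main__":
    downwind = np.linspace(50, 500, 10)
    conc = gaussian_plume_concentration(downwind, y=0.0, z=1.5,
                                        Q=5.0, u=2.0, H=0.5)
    for d, c in zip(downwind, conc):
        print(f"{d:6.0f} m  ->  {c:.3e} g/m^3")
```

In a coupled setup of the kind the abstract describes, Q would be driven by the vehicle emissions produced under each control policy (IPPO, IDQN, MPLight), so that reduced stop-and-go traffic translates directly into lower simulated roadside concentrations.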
Environmental perception traffic signal control method based on reinforcement learning
European Patent Office | 2025
Reinforcement learning-based traffic signal control
IEEE | 2024
Adaptive Traffic Signal Control for Urban Corridor Based on Reinforcement Learning
Springer Verlag | 2024