In urban areas, traffic congestion is a major problem, causing increased travel times, higher fuel consumption, and environmental pollution. Traditional traffic control systems cannot adapt to dynamic traffic conditions, leading to inefficient traffic flow and poor use of road infrastructure. In this paper, we propose a novel method to enhance traffic management strategies through dynamic simulation and reinforcement learning (RL). We present a simulation framework that models the complex interactions among vehicles, traffic signals, and infrastructure in a realistic urban environment. Within this framework, RL agents are deployed to learn adaptive traffic control policies that minimize congestion and optimize traffic flow. The approach exploits the ability of RL to continuously learn and adapt to changing traffic conditions, resulting in improved efficiency and responsiveness compared to static or rule-based control systems. Finally, we evaluate the effectiveness of the approach through extensive simulations and demonstrate its capability to significantly reduce congestion, travel times, and average waiting times, and to enhance overall traffic management. By integrating dynamic simulation with RL-based control strategies, the proposed approach offers a promising solution for addressing the challenges of urban traffic congestion and advancing the field of intelligent transportation systems.
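The abstract above describes RL agents that learn adaptive signal-control policies inside a traffic simulation. As a rough, non-authoritative illustration of that idea, the sketch below trains a tabular Q-learning agent to pick green phases for a toy two-approach intersection; the intersection model, the reward (negative total queue length as a proxy for waiting time), and all names are illustrative assumptions and are not taken from the paper's actual framework.

```python
# Minimal sketch of RL-based traffic-signal control (illustrative only; the
# paper's simulation framework, state/action spaces, and rewards are assumptions here).
import random
from collections import defaultdict

class ToyIntersection:
    """Toy intersection with two competing approaches (NS, EW); the action picks the green phase."""
    def __init__(self):
        self.queues = [0, 0]  # vehicles waiting on the NS and EW approaches

    def step(self, action):
        # Random vehicle arrivals on both approaches.
        for i in range(2):
            self.queues[i] += random.randint(0, 2)
        # The approach with the green phase discharges up to 3 vehicles.
        self.queues[action] = max(0, self.queues[action] - 3)
        # State: queue lengths capped at 10; reward: negative total queue length.
        return tuple(min(q, 10) for q in self.queues), -sum(self.queues)

def train(episodes=200, steps=100, alpha=0.1, gamma=0.95, epsilon=0.1):
    q_table = defaultdict(lambda: [0.0, 0.0])  # state -> estimated value of each green phase
    for _ in range(episodes):
        env = ToyIntersection()
        state = (0, 0)
        for _ in range(steps):
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.randint(0, 1)
            else:
                action = max((0, 1), key=lambda a: q_table[state][a])
            next_state, reward = env.step(action)
            # Standard Q-learning update toward the bootstrapped target.
            best_next = max(q_table[next_state])
            q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])
            state = next_state
    return q_table

if __name__ == "__main__":
    q = train()
    print("Learned phase values for a balanced state (5, 5):", q[(5, 5)])
```

In the paper's setting, the state, action, and reward would instead come from the dynamic simulation of vehicles, signals, and infrastructure described in the abstract, and the controller would typically use a function-approximation RL method rather than a small Q-table.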
Enhancing Traffic Control Strategies through Dynamic Simulation and Reinforcement Learning
05.06.2024
871293 bytes
Conference paper
Electronic resource
English
Enhancing UAS Integration in Controlled Traffic Regions Through Reinforcement Learning | DOAJ | 2025