Coordinated lane-assignment strategies offer promising solutions for improving traffic conditions. By anticipating potential downstream events and re-positioning connected vehicles in response, such systems can greatly improve the safety and efficiency of existing networks. Computing these assignments, however, grows exponentially more complex as the scale of the target network expands. In this paper, we explore solutions to optimal lane assignment at the macroscopic level of traffic, whereby decisions are aggregated across multiple vehicles clustered spatially into road sections. This approach alleviates some of the scalability challenges, but introduces dynamical interactions at the microscopic level that complicate higher-level decision-making. To this end, we provide results demonstrating that reinforcement learning (RL) strategies can generate responses that efficiently coordinate the lateral flow of vehicles across multiple road sections. In particular, we find that RL methods can robustly identify bottlenecks placed randomly within a given network and maneuver vehicles around them, substantively reducing travel time for both human-driven and connected vehicles.
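The abstract describes section-level lane assignment for connected vehicles learned with RL, but the record includes no code. The sketch below is only an illustration of one plausible way to cast that setting as an RL problem: cell-transmission-style densities over sections and lanes, a randomly placed bottleneck, and a per-section lateral-shift action. The class name `LaneAssignmentEnv`, the dynamics, and all parameters are assumptions made for this sketch, not the authors' formulation.

```python
import numpy as np


class LaneAssignmentEnv:
    """Toy macroscopic lane-assignment environment (illustrative only): a road
    is split into sections x lanes, each cell holding a vehicle density. The
    agent nudges density laterally (left / stay / right) in every section, and
    the reward penalizes cells pushed over capacity, a stand-in for travel-time
    loss."""

    def __init__(self, n_sections=5, n_lanes=3, seed=0):
        self.n_sections = n_sections
        self.n_lanes = n_lanes
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Initial densities, plus one randomly placed bottleneck cell whose
        # capacity is halved (mirroring the random bottlenecks in the abstract).
        self.density = self.rng.uniform(0.2, 0.5, (self.n_sections, self.n_lanes))
        self.capacity = np.ones((self.n_sections, self.n_lanes))
        bottleneck = (self.rng.integers(self.n_sections), self.rng.integers(self.n_lanes))
        self.capacity[bottleneck] = 0.5
        return self._obs()

    def _obs(self):
        # Observation: densities and the (assumed observable) capacity map.
        return np.concatenate([self.density.ravel(), self.capacity.ravel()])

    def step(self, action):
        # action[i] in {-1, 0, +1}: shift a fraction of section i's density
        # one lane to the left, leave it, or shift it one lane to the right.
        shift_frac = 0.2
        for i, a in enumerate(action):
            if a == 0:
                continue
            for lane in range(self.n_lanes):
                target = lane + int(a)
                if 0 <= target < self.n_lanes:
                    moved = shift_frac * self.density[i, lane]
                    self.density[i, lane] -= moved
                    self.density[i, target] += moved
        # Crude longitudinal update: each cell passes flow downstream, limited
        # by the downstream cell's remaining capacity; fresh demand enters the
        # first section.
        outflow = np.minimum(self.density, 0.3 * self.capacity)
        self.density -= outflow
        room = np.clip(self.capacity[1:] - self.density[1:], 0.0, None)
        self.density[1:] += np.minimum(outflow[:-1], room)
        self.density[0] += 0.2
        reward = -float(np.sum(np.maximum(self.density - self.capacity, 0.0)))
        return self._obs(), reward


if __name__ == "__main__":
    env = LaneAssignmentEnv()
    obs = env.reset()
    total = 0.0
    for _ in range(50):
        # Placeholder policy: random lane shifts. A learned policy (e.g. PPO)
        # would map `obs` to this per-section action vector instead.
        action = env.rng.integers(-1, 2, env.n_sections)
        obs, reward = env.step(action)
        total += reward
    print("return of random policy:", round(total, 2))
```

In this framing the joint action space grows with the number of sections rather than the number of vehicles, which is the scalability benefit the abstract alludes to; an actual study would replace the random policy with a deep RL algorithm and the toy dynamics with a microscopic traffic simulator.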
Lateral flow control of connected vehicles through deep reinforcement learning
2023-06-04
2412358 bytes
Conference paper
Electronic Resource
English
Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles
British Library Conference Proceedings | 2020
Deep Reinforcement Learning in Lane Merge Coordination for Connected Vehicles
ArXiv | 2020