The benefits of controlling a freeway bottleneck using reinforcement learning (RL)-based ramp metering (RM) and/or variable speed limit (VSL) controllers are well established. However, when both RM and VSL are used to control the freeway, it is not clear how each method benefits the traffic stream relative to the other. We argue that, depending on traffic conditions, it may be better to use one and not both, or, more importantly, to dynamically switch between the two. Moreover, a learning agent can automate the switch when warranted. In this paper, we offer extensive analysis and performance evaluations of RL-based as well as regulator-based RM and VSL controllers applied to both an Aimsun-simulated hypothetical freeway network from the literature and a real-world freeway on-ramp extracted from the Queen Elizabeth Way (QEW) in Ontario, Canada, under different levels of demand. The findings indicate that RM is more effective and beneficial than VSL in heavily congested scenarios, whereas VSL can be beneficial in moderately and lightly congested scenarios. We also show that RL has the advantage of automatically prioritizing one control method over the other depending on traffic conditions. We demonstrate that in heavy-congestion scenarios, the RL control agent that manages both RM and VSL clearly chooses RM over VSL.
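To illustrate the idea of a single learning agent that prioritizes RM or VSL depending on traffic conditions, the following minimal sketch uses tabular Q-learning over a joint RM/VSL action space against a toy stand-in environment. It is not the paper's method: the paper uses deep RL with Aimsun simulations, whereas the state discretization, action values, and `toy_step` dynamics below are illustrative assumptions only.

```python
import random
from collections import defaultdict

# Hypothetical discretized state: freeway density level (0 = free flow ... 4 = heavy congestion).
# Hypothetical joint actions: (ramp metering rate index, speed limit index).
RM_RATES = [400, 800, 1200, 1600]   # veh/h, illustrative values only
VSL_VALUES = [60, 80, 100]          # km/h, illustrative values only
ACTIONS = [(r, v) for r in range(len(RM_RATES)) for v in range(len(VSL_VALUES))]


def toy_step(density, action):
    """Very rough stand-in for a traffic simulator (not Aimsun).

    Returns (next_density, reward). RM is made more effective at high
    densities and VSL at moderate densities, mimicking the paper's
    qualitative finding; the exact numbers are arbitrary.
    """
    rm_idx, vsl_idx = action
    inflow_relief = (len(RM_RATES) - 1 - rm_idx) * (1.0 if density >= 3 else 0.3)
    smoothing = (len(VSL_VALUES) - 1 - vsl_idx) * (1.0 if density == 2 else 0.2)
    drift = random.choice([-1, 0, 1])
    next_density = min(4, max(0, density + drift - int(inflow_relief + smoothing > 1.0)))
    reward = -abs(next_density - 1)  # penalize deviation from light traffic
    return next_density, reward


def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over the joint RM/VSL action space."""
    q = defaultdict(float)
    for _ in range(episodes):
        density = random.randint(0, 4)
        for _ in range(50):
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(density, a)])
            nxt, reward = toy_step(density, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(density, action)] += alpha * (reward + gamma * best_next - q[(density, action)])
            density = nxt
    return q


if __name__ == "__main__":
    q_table = train()
    for d in range(5):
        rm_idx, vsl_idx = max(ACTIONS, key=lambda a: q_table[(d, a)])
        print(f"density level {d}: RM rate {RM_RATES[rm_idx]} veh/h, VSL {VSL_VALUES[vsl_idx]} km/h")
```

Printing the greedy action per density level shows how such an agent can learn to lean on restrictive metering in heavy congestion while relying more on speed limits in moderate traffic, the behavior the paper reports for its deep RL controller.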
Deep Reinforcement Learning Freeway Controller Chooses Ramp Metering Over Variable Speed Limits
Transportation Research Record: Journal of the Transportation Research Board
06.06.2025
Article (Journal)
Electronic Resource
English
Freeway Optimization Utilizing Ramp Metering
British Library Conference Proceedings | 1994
Freeway ramp metering: an overview
IEEE | 2002
Freeway Ramp Metering: An Overview
Online Contents | 2002