This work presents an approach for capturing vehicle-following behavior on highways, based on Inverse Reinforcement Learning (IRL) with a control Lyapunov function. The idea is to describe vehicle-following behavior as an optimal control problem whose underlying cost and constraints are expressed in terms of the data. Using the highD dataset as a case study, we identify vehicle-following dynamics and frame them in an IRL context. Using kernel regression, we show that this IRL problem reduces to a Quadratic Program (QP), solvable with standard optimization routines.
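The following is a minimal sketch of the kind of QP the abstract refers to, not the paper's actual formulation: it recovers cost weights from observed vehicle-following data with a standard solver. The feature matrix Phi, the observed accelerations a_obs, and the sign constraint standing in for a Lyapunov-type condition are all assumptions for illustration.

```python
# Hypothetical IRL-as-QP sketch: recover cost weights w from observed
# vehicle-following behavior, assuming a kernel-regressed feature matrix Phi
# and observed accelerations a_obs (both synthetic here).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic stand-in data: n samples, d features (e.g. gap, relative speed, speed)
n, d = 200, 3
Phi = rng.normal(size=(n, d))
a_obs = Phi @ np.array([1.5, -0.8, 0.3]) + 0.05 * rng.normal(size=n)

w = cp.Variable(d)                      # unknown cost weights
residual = Phi @ w - a_obs              # match the observed behavior
objective = cp.Minimize(cp.sum_squares(residual) + 1e-2 * cp.sum_squares(w))
constraints = [w[0] >= 0]               # placeholder for a stability-type (Lyapunov) condition
problem = cp.Problem(objective, constraints)
problem.solve()                         # solved by a standard QP routine via CVXPY

print("Recovered weights:", w.value)
```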
Lyapunov-Based Inverse Reinforcement Learning for Vehicle-Following Traffic Scenarios
24.09.2024
1050507 bytes
Conference paper
Electronic resource
English
Traffic light control method based on deep reinforcement learning and inverse reinforcement learning
Europäisches Patentamt | 2023