This work presents an approach for capturing vehicle-following behavior on highways, based on Inverse Reinforcement Learning (IRL) with a control Lyapunov function. The idea is to describe vehicle-following behavior as an optimal control problem whose underlying cost and constraints are expressed in terms of the data. Using the highD dataset as a case study, we identify vehicle-following dynamics and frame them in an IRL context. Using kernel regression, we show that this IRL problem reduces to a Quadratic Program (QP), solvable with standard optimization routines.
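A minimal sketch of the general idea described in the abstract, not the paper's implementation: it assumes synthetic stand-in features (spacing error, relative speed, acceleration) in place of highD data and the paper's kernel regression, and poses the recovery of cost weights under a Lyapunov-style positivity constraint as a QP solved with cvxpy. All variable names and constraints here are illustrative assumptions.

```python
# Hypothetical sketch: cast IRL-style cost-weight recovery for vehicle
# following as a QP with a control-Lyapunov-inspired constraint (cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic "observed" features per time step: spacing error, relative speed,
# and acceleration of the following vehicle (stand-ins for highD quantities).
T = 200
phi = rng.standard_normal((T, 3))          # feature matrix, one row per step
a_observed = phi @ np.array([1.5, 0.8, 0.3]) + 0.05 * rng.standard_normal(T)

# Decision variable: weights of an assumed quadratic cost on the features.
w = cp.Variable(3)

# IRL-as-regression objective: weighted features should reproduce the
# observed accelerations (the paper uses kernel regression instead of phi).
residual = phi @ w - a_observed
objective = cp.Minimize(cp.sum_squares(residual))

# Illustrative Lyapunov-style constraints (assumed): nonnegative weights so
# the induced cost V(x) = x^T diag(w) x is positive definite, and a dominant
# spacing weight as a stand-in for a decrease condition along the data.
constraints = [w >= 1e-3, w[0] >= w[1] + w[2]]

problem = cp.Problem(objective, constraints)
problem.solve()  # dispatched to a standard QP solver (e.g., OSQP) by cvxpy

print("Recovered cost weights:", w.value)
```

The key point illustrated is the last step of the abstract: with a quadratic objective and linear (Lyapunov-type) constraints on the weights, the learning problem is a QP that off-the-shelf solvers handle directly.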
Lyapunov-Based Inverse Reinforcement Learning for Vehicle-Following Traffic Scenarios
2024-09-24
1,050,507 bytes
Conference paper
Electronic Resource
English