In this paper, we address the problem of learning and modelling the behaviours of agents, such as pedestrians, in urban traffic environments using their trajectories. Existing state-of-the-art methods primarily rely on data-driven approaches to predict future trajectories. However, these approaches often overlook the influence of the physical environment on agents' decisions and struggle to model longer sequential trajectory data effectively. To overcome these limitations, we propose a novel hybrid framework that uses the attributes of the physical environment to predict the future trajectory an agent might take on the road. First, we capture agents' preferences in various urban traffic environments using a deep reward learning technique. Next, leveraging the learned reward map and short past motion trajectories of the agents, we employ a probabilistic data-driven sequential model based on transformer networks to provide robust long-term forecasting of agents' trajectories. In our experiments, the proposed framework was evaluated on a large-scale real-world dataset of agents in urban traffic environments, where it outperforms state-of-the-art techniques by a significant margin.
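The abstract describes a two-stage pipeline: a deep reward-learning stage that scores locations of the environment, followed by a transformer-based probabilistic forecaster conditioned on the learned reward map and the short observed trajectory. The sketch below is a minimal, hypothetical PyTorch rendering of such a pipeline; the module names, layer sizes, and the pooled-reward conditioning are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class RewardMapNet(nn.Module):
    """Stage 1 (assumed form): map a semantic raster of the scene to a per-cell reward."""
    def __init__(self, in_channels: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 1),              # one reward value per grid cell
        )

    def forward(self, scene: torch.Tensor) -> torch.Tensor:
        return self.net(scene)                    # (B, 1, H, W) reward map

class TrajectoryTransformer(nn.Module):
    """Stage 2 (assumed form): probabilistic forecaster conditioned on past motion and the reward map."""
    def __init__(self, d_model: int = 64, horizon: int = 12):
        super().__init__()
        self.horizon = horizon
        self.past_embed = nn.Linear(2, d_model)    # (x, y) position -> token
        self.reward_embed = nn.Linear(1, d_model)  # pooled reward context -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon * 4)  # mean + log-std for each future step

    def forward(self, past_xy: torch.Tensor, reward_map: torch.Tensor):
        # past_xy: (B, T_past, 2); reward_map: (B, 1, H, W)
        reward_ctx = reward_map.mean(dim=(2, 3))                        # crude global pooling, (B, 1)
        tokens = torch.cat(
            [self.reward_embed(reward_ctx).unsqueeze(1), self.past_embed(past_xy)], dim=1
        )
        enc = self.encoder(tokens)                                      # (B, 1 + T_past, d_model)
        params = self.head(enc[:, 0]).view(-1, self.horizon, 4)
        mean, log_std = params[..., :2], params[..., 2:]
        return torch.distributions.Normal(mean, log_std.exp())         # distribution over future (x, y)

# Toy usage with random inputs (shapes are illustrative).
scene = torch.randn(1, 3, 64, 64)        # semantic raster of the environment
past = torch.randn(1, 8, 2)              # 8 observed past positions
dist = TrajectoryTransformer()(past, RewardMapNet()(scene))
future_sample = dist.sample()            # (1, 12, 2) sampled future trajectory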
Agent Trajectory Prediction in Urban Traffic Environments via Deep Reward Learning
2024-09-24
1133578 bytes
Conference paper
Electronic Resource
English
Network-Wide Vehicle Trajectory Prediction in Urban Traffic Networks using Deep Learning
Transportation Research Record | 2018
Extracting Traffic Conflict at Urban Intersection Using Deep Learning Trajectory Detection
Springer Verlag | 2023
Leverage Deep Learning Methods for Vehicle Trajectory Prediction in Chaotic Traffic
Springer Verlag | 2023
Agent Reward Shaping for Alleviating Traffic Congestion
NTRS | 2006