This paper presents a novel pedestrian trajectory prediction framework for autonomous driving systems in multimodal sensor settings, where a set of sensors (e.g., camera, LiDAR) can be used for the prediction task. While achieving promising results, existing camera-based prediction models often fail to capture more complex human motion in the real world. Our work mitigates this issue by proposing a multimodal sensor framework for pedestrian trajectory prediction (xMTP) consisting of two branches of predictors. In the first branch, we leverage the promising performance of existing camera-based prediction models, such as BiTrap [1] and PIE [2], to model human intentions. We refer to these camera-based prediction models as native predictors, which can potentially be any off-the-shelf prediction model. In the other branch, we propose a novel LiDAR-based predictor (P3D) to model pedestrians' 3D movements, as LiDAR data provides rich depth information that helps reason about pedestrian motion in the real world. We then develop a novel learnable prediction sharing with uncertainty estimation (PSUE) mechanism that quantifies the uncertainty of each branch's predictions relative to its past performance on observed pedestrian motion and selects the best prediction for the given scenario. Through extensive experiments on the commonly used KITTI [3] and SHIFT [4] datasets, both of which contain camera and LiDAR data, we demonstrate the effectiveness of our framework when incorporating two different state-of-the-art native predictors, BiTrap [1] and PIE [2], under different pedestrian movement scenarios.
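The abstract does not give implementation details, but the uncertainty-based selection between the two branches can be illustrated with a minimal PyTorch sketch. The class name `PSUESelector`, the MLP uncertainty head, and all tensor shapes below are assumptions made for illustration; this is not the authors' PSUE implementation.

```python
# Hypothetical sketch of the two-branch selection idea (names, shapes, and the
# MLP uncertainty head are assumptions, not the paper's actual implementation).
import torch
import torch.nn as nn


class PSUESelector(nn.Module):
    """Scores the uncertainty of each branch's prediction from its recent errors
    on the observed part of the trajectory, then softly weights the branch with
    the lower estimated uncertainty."""

    def __init__(self, obs_len: int = 8, hidden: int = 32):
        super().__init__()
        # Maps a branch's per-step observation errors to a scalar uncertainty.
        self.uncertainty_head = nn.Sequential(
            nn.Linear(obs_len, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, pred_cam, pred_lidar, err_cam, err_lidar):
        # pred_*: (B, T_pred, 2) future trajectories from each branch
        # err_*:  (B, obs_len) per-step errors of each branch on observed frames
        u = torch.stack(
            [self.uncertainty_head(err_cam), self.uncertainty_head(err_lidar)],
            dim=1,
        ).squeeze(-1)                       # (B, 2) estimated uncertainties
        w = torch.softmax(-u, dim=1)        # lower uncertainty -> higher weight
        preds = torch.stack([pred_cam, pred_lidar], dim=1)  # (B, 2, T_pred, 2)
        fused = (w[:, :, None, None] * preds).sum(dim=1)    # soft selection
        return fused, w


if __name__ == "__main__":
    B, T_obs, T_pred = 4, 8, 12
    selector = PSUESelector(obs_len=T_obs)
    fused, weights = selector(
        torch.randn(B, T_pred, 2), torch.randn(B, T_pred, 2),
        torch.rand(B, T_obs), torch.rand(B, T_obs),
    )
    print(fused.shape, weights.shape)  # torch.Size([4, 12, 2]) torch.Size([4, 2])
```

A hard selection variant would replace the softmax weighting with an argmin over the per-branch uncertainties, which matches the "select the best prediction" phrasing more literally.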
Multi-Modal Sensor Framework with Learnable Uncertainty Estimator for Pedestrian Trajectory Prediction
2023-09-24
Conference paper