This paper presents a novel pedestrian trajectory prediction framework for autonomous driving systems in multimodal sensor settings, where a set of sensors (e.g., camera, LiDAR) can be used for the prediction task. While achieving promising results, existing camera-based prediction models often fail to capture more complex human motion in the real world. Our work mitigates this issue by proposing a multimodal sensor framework for pedestrian trajectory prediction (xMTP), consisting of two branches of predictors. On the first branch, we leverage the promising performance of existing camera-based prediction models, such as BiTrap [1] and PIE [2], to model human intentions. We refer to these camera-based prediction models as native predictors, which can potentially be any off-the-shelf prediction model. On the other branch, we propose a novel LiDAR-based predictor (P3D) to model pedestrians' 3D movements, as LiDAR data provides rich depth information that helps reason about pedestrian motion in the real world. We then develop a novel learnable prediction sharing with uncertainty estimation (PSUE) module that quantifies the uncertainty of the predictions from the two branches relative to their past performance on observed pedestrian motions and selects the best prediction for the given scenario. Through extensive experiments on the commonly used KITTI [3] and SHIFT [4] datasets, which contain both camera and LiDAR data, we demonstrate the effectiveness of our framework in incorporating two different state-of-the-art native predictors, BiTrap [1] and PIE [2], under different pedestrian movement scenarios.
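
The abstract describes a two-branch architecture: an off-the-shelf camera-based "native" predictor, a LiDAR-based 3D predictor (P3D), and a learnable uncertainty-based selector (PSUE) that weighs the two branches by their recent performance on the observed history. The snippet below is a minimal PyTorch sketch of that general pattern, not the authors' implementation: the module structure, tensor shapes, the GRU encoder, the error-conditioned softmax gate, and the assumption that both branches output trajectories in a shared 3D frame are illustrative choices made here for concreteness.

```python
# Minimal sketch of a two-branch trajectory predictor with an
# uncertainty-conditioned selector, loosely following the abstract above.
# All module names, shapes, and the gating design are illustrative
# assumptions, not the actual P3D / PSUE implementation.
import torch
import torch.nn as nn


class LiDARBranch(nn.Module):
    """Stand-in for a LiDAR-based 3D trajectory predictor (P3D in the abstract)."""

    def __init__(self, pred_len=12, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, pred_len * 3)
        self.pred_len = pred_len

    def forward(self, obs_3d):                       # obs_3d: (B, obs_len, 3)
        _, h = self.encoder(obs_3d)                  # h: (1, B, hidden)
        out = self.decoder(h[-1])                    # (B, pred_len * 3)
        return out.view(-1, self.pred_len, 3)        # (B, pred_len, 3)


class UncertaintySelector(nn.Module):
    """Stand-in for the PSUE idea: learns mixing weights for the two
    branches conditioned on their recent errors on the observed segment."""

    def __init__(self, hidden=32):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, pred_cam, pred_lidar, err_cam, err_lidar):
        # err_*: per-sample displacement error of each branch on the
        # observed history, shape (B,)
        w = torch.softmax(self.score(torch.stack([err_cam, err_lidar], dim=-1)), dim=-1)
        # Soft fusion; a hard "select the better branch" variant would
        # instead take an argmax over w at inference time.
        return w[:, 0, None, None] * pred_cam + w[:, 1, None, None] * pred_lidar


if __name__ == "__main__":
    B, obs_len, pred_len = 4, 8, 12
    obs_3d = torch.randn(B, obs_len, 3)              # observed 3D positions from LiDAR
    pred_cam = torch.randn(B, pred_len, 3)           # placeholder for any native predictor's output
    pred_lidar = LiDARBranch(pred_len)(obs_3d)
    fused = UncertaintySelector()(pred_cam, pred_lidar, torch.rand(B), torch.rand(B))
    print(fused.shape)                               # torch.Size([4, 12, 3])
```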


    Title: Multi-Modal Sensor Framework with Learnable Uncertainty Estimator for Pedestrian Trajectory Prediction

    Contributors:

    Publication date: 2023-09-24

    Size: 3048614 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English


    Similar items:

    Multi-modal Learnable Queries for Image Aesthetics Assessment

    Xiong, Zhiwei / Zhang, Yunfan / Shen, Zhiqi et al. | ArXiv | 2024

    Free access


    Multi-modal trajectory prediction method

    JIANG WENJUAN / JIN ZHI / WANG REN et al. | European Patent Office | 2023

    Free access