Forecasting accurate, consistent, and uncertainty-aware future states of traffic participants is imperative for improving social compliance and driving safety in autonomous driving systems, particularly in interactive scenarios. Tackling multimodality through direct estimation or regression remains the central challenge, chiefly because the granularity of the modeled uncertainty must be balanced against the sparsity of positive training samples available to update each decoding mode. This work introduces GTR, a multimodal motion prediction framework with a group-wise modal allocation scheme for Transformer-based trajectory decoding. Our approach tackles this challenge in two steps. First, a group-wise allocation strategy serves as a plug-in decoding initialization for each modality, densely increasing the diversity of positive training queries per modality. Second, a miss-rate optimization pipeline further maximizes the discriminative margins of positive decoding queries. This compact decoding strategy achieves compelling prediction accuracy and social consistency, with strong performance across the primary metrics of the Waymo Open Motion Dataset (WOMD) leaderboard.
Multi-Modal Motion Prediction with Group-Wise Modal Assignment Transformer for Autonomous Driving
2024-09-24
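
The abstract describes two mechanisms without implementation detail: a group-wise allocation that densifies positive queries during training, and a miss-rate pipeline that widens confidence margins. The snippet below is a minimal PyTorch sketch of one plausible reading, in which the K mode queries are partitioned into groups, each group performs its own winner-takes-all matching against the ground truth, and a hinge term pushes each positive query's score above the hardest negative in its group. The function names (group_wise_assignment, margin_ranking_loss), tensor shapes, and the margin value are illustrative assumptions, not the authors' published code.

import torch

def group_wise_assignment(pred_trajs, gt_traj, num_groups):
    """Pick one positive query per group by average displacement error (ADE).

    pred_trajs: (K, T, 2) trajectories decoded from K mode queries.
    gt_traj:    (T, 2)    ground-truth future trajectory.
    Returns a (num_groups,) tensor of positive query indices into [0, K).
    """
    K = pred_trajs.shape[0]
    assert K % num_groups == 0, "queries must split evenly into groups"
    group_size = K // num_groups
    # ADE of every query against the ground truth: (K,)
    ade = (pred_trajs - gt_traj.unsqueeze(0)).norm(dim=-1).mean(dim=-1)
    # Winner-takes-all inside each group, so every training sample yields
    # num_groups positive queries instead of a single global winner.
    winners = ade.view(num_groups, group_size).argmin(dim=-1)
    offsets = torch.arange(num_groups, device=ade.device) * group_size
    return winners + offsets

def margin_ranking_loss(scores, positive_idx, num_groups, margin=0.2):
    """Hinge loss pushing each positive query's confidence above the hardest
    negative in its group by at least `margin`; a ranking proxy for miss
    rate, since the positive mode must survive top-k mode selection.

    scores: (K,) confidence logits for the K mode queries.
    """
    group_size = scores.shape[0] // num_groups
    grouped = scores.view(num_groups, group_size)
    pos = scores[positive_idx]                        # (num_groups,)
    rows = torch.arange(num_groups, device=scores.device)
    mask = torch.zeros_like(grouped, dtype=torch.bool)
    mask[rows, positive_idx % group_size] = True      # hide the positives
    hardest_neg = grouped.masked_fill(mask, float("-inf")).max(dim=-1).values
    return torch.clamp(margin + hardest_neg - pos, min=0.0).mean()

Under these assumptions, training would run the matcher per agent, apply regression losses only to the returned positive queries, and add the margin term to the usual classification loss; at inference, as in Group-DETR-style decoders, a single group (or an aggregation over groups) would supply the final K output modes.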