Vision-centric motion prediction focuses on accurately determining instance masks and their future trajectories from surround-view cameras, and offers inherent merits such as a holistic perspective and a fully differentiable design. Nonetheless, it is still impeded by sparse bird's-eye view (BEV) representations and unfavorable temporal context across frames, resulting in sub-optimal support for decision-making and vehicle navigation. In this work, we propose a novel Difference-Guided Motion Prediction framework for vision-centric autonomous driving, termed DMP, which integrates BEV map refinement with spatial-temporal relation modeling in a hierarchical manner. Specifically, a bidirectional view projection strategy is introduced to generate complementary BEV features via depth-consistency correction. To promote spatiotemporal context aggregation, we design a difference-guided motion approach that approximates offsets to align motion-aware cues between adjacent frames, and we further develop a dual-stream pyramid module for historical information fusion and future instance segmentation over specified horizons. Extensive experiments on the large-scale nuScenes dataset demonstrate that DMP outperforms the baselines by a remarkable margin and delivers competitive motion prediction across diverse scenarios and range settings, suggesting its effectiveness and superiority. Details will be available at https://github.com/pupu-chenyanyan/DMP-VAD.
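The difference-guided alignment described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch, assuming the motion-aware cue is the element-wise difference between adjacent BEV feature maps and that a small convolutional head regresses per-cell offsets used to warp the previous frame's features toward the current frame. The module name, head architecture, and tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferenceGuidedAlign(nn.Module):
    """Hypothetical sketch: warp the previous frame's BEV features toward
    the current frame using per-cell 2-D offsets predicted from the
    feature difference between the two frames."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a (dx, dy) offset per BEV cell from the frame difference.
        self.offset_head = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, kernel_size=3, padding=1),
        )

    def forward(self, bev_prev: torch.Tensor, bev_curr: torch.Tensor) -> torch.Tensor:
        b, _, h, w = bev_curr.shape
        # Motion-aware cue: element-wise difference between adjacent frames.
        diff = bev_curr - bev_prev
        offset = self.offset_head(diff)  # (B, 2, H, W), in BEV cells

        # Build a sampling grid shifted by the predicted offsets,
        # normalized to [-1, 1] as required by grid_sample.
        ys, xs = torch.meshgrid(
            torch.arange(h, device=bev_curr.device, dtype=bev_curr.dtype),
            torch.arange(w, device=bev_curr.device, dtype=bev_curr.dtype),
            indexing="ij",
        )
        grid_x = (xs + offset[:, 0]) / (w - 1) * 2 - 1
        grid_y = (ys + offset[:, 1]) / (h - 1) * 2 - 1
        grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)

        # Differentiable warp of the previous BEV features, keeping the
        # whole alignment step end-to-end trainable.
        return F.grid_sample(bev_prev, grid, align_corners=True)

# Usage: align a 64-channel 200x200 BEV map from frame t-1 to frame t.
align = DifferenceGuidedAlign(channels=64)
aligned = align(torch.randn(1, 64, 200, 200), torch.randn(1, 64, 200, 200))
```

Predicting offsets from the frame difference (rather than from either frame alone) concentrates the head's capacity on regions where motion actually occurred, which is the intuition behind using the difference as guidance.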
DMP: Difference-Guided Motion Prediction for Vision-Centric Autonomous Driving
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 6, pp. 9094-9108
2025-06-01
3622349 bytes
Article (Journal)
Electronic Resource
English