Millimeter-wave (MMW) radar and the monocular camera are among the most commonly used sensors in the perception systems of autonomous vehicles. While radar-camera (R-C) fusion has been widely explored for object detection and tracking, few works use it to realize 3D multiple object tracking (MOT), because neither sensor alone provides precise and sufficient 3D information. To tackle this problem, this paper proposes a practical 3D MOT method based on R-C fusion. A suitable 3D object state space model is constructed, single-sensor results are validated before fusion, and the challenging spatio-temporal asynchrony between the sensors is resolved during the fusion process. We then optimize the data association parameters so that they adapt to various scenes and sensor properties without manual adjustment. Field tests demonstrate the effectiveness of our method: after optimization, the MOTA of 3D MOT with R-C fusion outperforms the baseline by 13.0%, and the tracking error of object distance is reduced by 1.03 m.
3D Multiple Object Tracking with Multi-modal Fusion of Low-cost Sensors for Autonomous Driving
2022-10-08
Conference paper
Electronic Resource
English
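The abstract above mentions constructing a 3D object state space model for tracking, but does not give its form. Below is a minimal, hypothetical sketch of one common choice for such a model: a constant-velocity Kalman filter over 3D position and velocity, updated with fused position measurements. The class name Track3D, the time step dt, and the noise covariances Q and R are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

class Track3D:
    """Hypothetical constant-velocity 3D Kalman filter track.

    State x = [px, py, pz, vx, vy, vz]^T; measurement z = [px, py, pz]^T.
    """

    def __init__(self, z0, dt=0.1):
        self.x = np.hstack([z0, np.zeros(3)])              # initial state from first detection
        self.P = np.eye(6)                                 # state covariance
        self.F = np.eye(6)                                 # transition matrix
        self.F[:3, 3:] = dt * np.eye(3)                    # p' = p + v * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = 0.01 * np.eye(6)                          # process noise (assumed)
        self.R = 0.5 * np.eye(3)                           # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = z - self.H @ self.x                            # innovation
        S = self.H @ self.P @ self.H.T + self.R            # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

if __name__ == "__main__":
    trk = Track3D(np.array([12.0, -1.5, 0.8]))   # first fused detection (x, y, z in meters)
    trk.predict()
    trk.update(np.array([12.4, -1.4, 0.8]))      # next fused detection
    print(trk.x[:3])                              # refined 3D position estimate
```

The innovation covariance S computed in update() is also what a Mahalanobis-distance gate for data association would use, which is one way the association parameters mentioned in the abstract could be tied to sensor noise properties.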
M2DA: Multi-Modal Fusion Transformer Incorporating Driver Attention for Autonomous Driving
ArXiv | 2024
Leveraging Uncertainties for Deep Multi-Modal Object Detection in Autonomous Driving
British Library Conference Proceedings | 2020