Millimeter-wave (MMW) radar and the monocular camera are the most commonly used sensors in the perception systems of autonomous vehicles. While radar-camera (R-C) fusion has been widely explored for object detection and tracking, few works use it to realize 3D multiple object tracking (MOT), because neither sensor alone provides precise and sufficient 3D information. To tackle this problem, this paper proposes a practical 3D MOT method based on R-C fusion. A suitable 3D object state space model is constructed, single-sensor results are validated before fusion, and the challenging spatial-temporal asynchronization is overcome during the fusion process. Data association parameters are then optimized so that they adapt to various scenes and sensor properties without manual adjustment. Field tests demonstrate the effectiveness of the method: after optimization, the MOTA of 3D MOT with R-C fusion outperforms the baseline by 13.0%, and the object-distance tracking error is reduced by 1.03 m.
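The abstract outlines a tracking pipeline built on a 3D object state space model with gated data association. As a purely illustrative sketch, not the authors' implementation, the Python snippet below shows one common way to realize such components: a constant-velocity Kalman filter over 3D position and velocity, with a Mahalanobis gate for associating fused radar-camera detections to tracks. All matrices, noise values, the frame interval, and the gate threshold are assumptions.

# Minimal sketch: constant-velocity 3D state space model with Kalman
# predict/update and Mahalanobis gating. State x = [px, py, pz, vx, vy, vz];
# measurements are 3D positions assumed to be already fused from radar and camera.
import numpy as np

DT = 0.1  # assumed frame interval in seconds

# Constant-velocity transition matrix and position-only measurement matrix
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)
H = np.hstack([np.eye(3), np.zeros((3, 3))])

Q = 0.1 * np.eye(6)   # process noise covariance (assumed)
R = 0.5 * np.eye(3)   # measurement noise covariance (assumed)

def predict(x, P):
    """Propagate state mean and covariance one step ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Standard Kalman update with a 3D position measurement z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

def mahalanobis_gate(x, P, z, threshold=7.815):
    """Accept a track-measurement pair if the squared Mahalanobis distance
    is below a chi-square gate (7.815 = 95% quantile for 3 dof, assumed)."""
    S = H @ P @ H.T + R
    innov = z - H @ x
    d2 = innov @ np.linalg.inv(S) @ innov
    return d2 < threshold

# Example: one predict/gate/update cycle for a single track
x = np.array([10.0, 2.0, 0.0, 1.0, 0.0, 0.0])  # initial state
P = np.eye(6)
x, P = predict(x, P)
z = np.array([10.12, 2.01, 0.02])              # fused 3D detection
if mahalanobis_gate(x, P, z):
    x, P = update(x, P, z)
print(x[:3])  # updated 3D position estimate

In this sketch the gate threshold and noise covariances are fixed by hand; the paper's stated contribution is to optimize such data association parameters automatically across different scenes and sensor properties.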


    Title:

    3D Multiple Object Tracking with Multi-modal Fusion of Low-cost Sensors for Autonomous Driving


    Contributors:
    Zhou, Taohua (author) / Jiang, Kun (author) / Wang, Sijia (author) / Shi, Yining (author) / Yang, Mengmeng (author) / Ren, Weining (author) / Yang, Diange (author)


    Publication date:

    2022-10-08


    Format / Extent:

    1534529 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Multi-Modal Sensor Fusion and Object Tracking for Autonomous Racing

    Karle, Phillip / Fent, Felix / Huch, Sebastian et al. | IEEE | 2023



    Leveraging Uncertainties for Deep Multi-modal Object Detection in Autonomous Driving

    Feng, Di / Cao, Yifan / Rosenbaum, Lars et al. | IEEE | 2020