Multi-modal 3D object detection models for automated driving have demonstrated exceptional performance on computer vision benchmarks like nuScenes. However, their reliance on densely sampled LiDAR point clouds and meticulously calibrated sensor arrays poses challenges for real-world applications. Issues such as sensor misalignment, miscalibration, and disparate sampling frequencies lead to spatial and temporal misalignment of LiDAR and camera data. Additionally, the integrity of LiDAR and camera data is often compromised by adverse environmental conditions such as inclement weather, which introduce occlusions and noise. To address these challenges, we introduce MultiCorrupt, a comprehensive benchmark designed to evaluate the robustness of multi-modal 3D object detectors against ten distinct types of corruptions. We evaluate five state-of-the-art multi-modal detectors on MultiCorrupt and analyze their performance in terms of resistance ability. Our results show that existing methods exhibit varying degrees of robustness depending on the type of corruption and their fusion strategy. We provide insights into which multi-modal design choices make such models robust against certain perturbations. The dataset generation code and benchmark are open-sourced at https://github.com/ika-rwth-aachen/MultiCorrupt.
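The corruption generators themselves live in the linked repository; as a rough, hypothetical illustration of the idea (not the MultiCorrupt implementation), the NumPy sketch below shows how a LiDAR point cloud could be degraded with random point dropout, Gaussian sensor noise, and a small extrinsic yaw error that mimics LiDAR-camera miscalibration. The function name, parameter names, and default values are assumptions chosen for the example.

    import numpy as np

    def corrupt_lidar(points, drop_ratio=0.3, noise_std=0.02, yaw_error_deg=1.0, rng=None):
        # Toy corruption of an (N, 3) LiDAR point cloud: point dropout,
        # Gaussian jitter, and a small yaw rotation mimicking miscalibrated
        # LiDAR-camera extrinsics. Names and defaults are illustrative only.
        rng = np.random.default_rng() if rng is None else rng

        # 1) Point dropout: randomly discard a fraction of the returns.
        keep = rng.random(len(points)) > drop_ratio
        pts = points[keep]

        # 2) Sensor noise: zero-mean Gaussian jitter on every coordinate.
        pts = pts + rng.normal(0.0, noise_std, size=pts.shape)

        # 3) Spatial misalignment: rotate the cloud by a small yaw error,
        #    as if the extrinsic calibration were slightly off.
        yaw = np.deg2rad(yaw_error_deg)
        rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                        [np.sin(yaw),  np.cos(yaw), 0.0],
                        [0.0,          0.0,         1.0]])
        return pts @ rot.T

    # Usage on a synthetic cloud of 10,000 points.
    cloud = np.random.default_rng(seed=0).uniform(-50.0, 50.0, size=(10_000, 3))
    print(cloud.shape, "->", corrupt_lidar(cloud).shape)

In the benchmark itself, corruptions are applied to nuScenes data; the toy parameters above merely stand in for what would be configurable severity levels in a real corruption pipeline.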


    Title :

    MultiCorrupt: A Multi-Modal Robustness Dataset and Benchmark of LiDAR-Camera Fusion for 3D Object Detection


    Contributors :


    Publication date :

    2024-06-02


    Size :

    2376232 bytes





    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English



    Similar titles :

    InfraDet3D: Multi-Modal 3D Object Detection based on Roadside Infrastructure Camera and LiDAR Sensors

    Zimmer, Walter / Birkner, Joseph / Brucker, Marcel et al. | IEEE | 2023


    Multi-Object Tracking with Object Candidate Fusion for Camera and LiDAR Data

    Yin, Huilin / Lu, Yu / Lin, Jia et al. | IEEE | 2023


    Deep Learning-based Radar, Camera, and Lidar Fusion for Object Detection

    Nobis, Felix Otto Geronimo | TIBKAT | 2022


    Multi-Stage Residual Fusion Network for LiDAR-Camera Road Detection

    Yu, Dameng / Xiong, Hui / Xu, Qing et al. | British Library Conference Proceedings | 2019