We introduce and study the problem of camera-radar fusion for 3-D depth reconstruction. This problem is motivated by autonomous driving applications, in which we can expect to have access to both front-facing camera and radar sensors. These two sensors are complementary in several respects: the camera is a passive sensor measuring azimuth and elevation; the radar is an active sensor measuring azimuth and range. Fusing their measurements is therefore beneficial. Our fusion solution uses a modified encoder-decoder deep convolutional neural network. We train and evaluate this network on over 100 000 samples collected in highway environments. Our results demonstrate an improvement in reconstruction accuracy and robustness from fusing the two sensors.
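The abstract describes a modified encoder-decoder deep convolutional network that fuses the two sensors, but gives no architectural details. As a rough illustration only, the PyTorch sketch below shows one common way such a fusion could be wired: the camera image and a radar range map projected into the image plane are concatenated and passed through a small encoder-decoder with skip connections that regresses a dense depth map. The layer counts, channel sizes, and the early-fusion scheme are assumptions made for this sketch and are not taken from the paper.

```python
# Minimal sketch of a camera-radar fusion encoder-decoder (illustrative only;
# NOT the authors' architecture). Early fusion: the RGB image and a sparse
# radar range map projected into the image plane are concatenated as input.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, preserving spatial resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class CameraRadarFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3 + 1, 32)   # 3 camera channels + 1 radar channel
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)     # concatenated skip from enc2
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)      # concatenated skip from enc1
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # dense depth output

    def forward(self, image, radar_depth):
        x = torch.cat([image, radar_depth], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    net = CameraRadarFusionNet()
    img = torch.randn(1, 3, 128, 256)     # camera image (batch, C, H, W)
    radar = torch.randn(1, 1, 128, 256)   # radar range map in the image plane
    print(net(img, radar).shape)          # torch.Size([1, 1, 128, 256])
```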


    Title:

    Camera-Radar Fusion for 3-D Depth Reconstruction


    Contributors:

    Niesen, Urs / Unnikrishnan, Jayakrishnan


    Date of publication:

    19 October 2020


    Format / extent:

    3259227 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Camera-Radar Fusion for 3-D Depth Reconstruction
    Niesen, Urs / Unnikrishnan, Jayakrishnan | British Library Conference Proceedings | 2020

    Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous Driving Scenarios
    Zheng, Lianqing / Ai, Wenjin / Ma, Zhixiong | SAE Technical Papers | 2023

    Camera-Radar Fusion Using Correspondences
    MICHIELIN FRANCESCO / VOGEL OLIVER | Europäisches Patentamt | 2023 | Free access

    Radar-Camera Fusion for Vehicle Navigation
    ROSENBLUM KEVIN / DAGAN EREZ / BOUBLIL DAVID et al. | Europäisches Patentamt | 2024 | Free access