Driven by deep learning, perception technology for autonomous driving has advanced rapidly in recent years, enabling vehicles to accurately detect and interpret the surrounding environment for safe and efficient navigation. To achieve accurate and robust perception, autonomous vehicles are typically equipped with multiple sensors, making sensor fusion a crucial component of the perception system. Among these sensors, radars and cameras provide complementary and cost-effective perception of the surrounding environment regardless of lighting and weather conditions. This review aims to serve as a comprehensive guide to radar-camera fusion, concentrating in particular on perception tasks related to object detection and semantic segmentation. Starting from the operating principles of the radar and camera sensors, we examine their data processing pipelines and representations, followed by an in-depth analysis and summary of radar-camera fusion datasets. In reviewing radar-camera fusion methodologies, we address the interrogative questions "why to fuse", "what to fuse", "where to fuse", "when to fuse", and "how to fuse", and subsequently discuss the challenges and potential research directions in this domain. To ease the retrieval and comparison of datasets and fusion methods, we also provide an interactive website: https://radar-camera-fusion.github.io.
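As a concrete illustration of the "what to fuse" and "where to fuse" questions above, the following minimal Python sketch (not taken from the paper) shows a data-level association step common to many radar-camera pipelines: projecting radar returns onto the image plane. The calibration matrices K and [R|t], the camera-style coordinate convention (z forward along the optical axis), and the sample points are all hypothetical placeholders, not values from any real sensor setup.

import numpy as np

# Hypothetical calibration (placeholder values, not from the paper):
# K is a 3x3 camera intrinsic matrix; Rt = [R|t] is a 3x4 extrinsic
# matrix mapping radar coordinates into the camera frame.
K = np.array([[1266.4,    0.0, 816.3],
              [   0.0, 1266.4, 491.5],
              [   0.0,    0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [-0.3], [1.5]])])

def project_radar_to_image(points_radar: np.ndarray) -> np.ndarray:
    """Project Nx3 radar points (metres, camera-style axes: x right,
    y down, z forward) onto the image plane.

    Returns an Mx3 array of (u, v, depth); points behind the camera are
    discarded. The projected points can then serve as sparse depth hints
    attached to image pixels, one common answer to "what to fuse" at the
    data level.
    """
    ones = np.ones((points_radar.shape[0], 1))
    pts_h = np.hstack([points_radar, ones])   # homogeneous Nx4
    cam = (Rt @ pts_h.T).T                    # radar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                  # keep points in front of camera
    uvw = (K @ cam.T).T                       # camera frame -> pixel plane
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide
    return np.hstack([uv, cam[:, 2:3]])       # (u, v, depth)

# Example: three hypothetical radar returns; the last one ends up
# behind the camera and is dropped by the depth check.
radar_points = np.array([[ 2.0, 0.5, 10.0],
                         [-1.0, 0.2, 25.0],
                         [ 0.0, 0.0, -5.0]])
print(project_radar_to_image(radar_points))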


    Title:

    Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving: A Comprehensive Review


    Contributors:
    Yao, Shanliang (author) / Guan, Runwei (author) / Huang, Xiaoyu (author) / Li, Zhuoxiao (author) / Sha, Xiangyu (author) / Yue, Yong (author) / Lim, Eng Gee (author) / Seo, Hyungjoon (author) / Man, Ka Lok (author) / Zhu, Xiaohui (author)

    Published in:

    IEEE Transactions on Intelligent Vehicles

    Publication date:

    2024-01-01


    Size:

    7,620,486 bytes (approx. 7.6 MB)




    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English



    Similar titles:

    RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving

    El Madawi, Khaled / Rashed, Hazem / El Sallab, Ahmad et al. | IEEE | 2019


    Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving

    Banerjee, Koyel / Notz, Dominik / Windelen, Johannes et al. | IEEE | 2018



    Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous Driving Scenarios

    Zheng, Lianqing / Ai, Wenjin / Ma, Zhixiong | SAE Technical Papers | 2023