Bird's Eye View (BEV) based perception algorithms are receiving increasing attention in the field of autonomous driving, yet how to effectively exploit multi-sensor data in the BEV representation remains a challenge. In this paper, we propose PETRFusion, a BEV segmentation network that fuses multiple cameras with a LiDAR sensor. We focus on the task of BEV segmentation and design the network architecture around the structural similarity between LiDAR point clouds and semantic segmentation maps. The proposed method uses point cloud features as queries for semantic segmentation and employs Transformers to fuse image features with point cloud features, thereby preserving as much of the point cloud's 3D spatial information as possible. Additionally, we enhance the 3D positional encoder of PETR to provide more reliable 3D image features. We conduct detailed validation on the nuScenes dataset and achieve state-of-the-art results.
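The abstract describes the fusion idea only at a high level: point cloud features act as segmentation queries that a Transformer fuses with 3D-position-encoded image features. As a rough, hypothetical sketch of that idea (not the authors' implementation; all module choices, dimensions, layer counts, and the class count are assumptions), a query-based cross-attention fusion step could look like this in PyTorch:

    import torch
    import torch.nn as nn


    class CrossAttentionFusion(nn.Module):
        """Hypothetical sketch: LiDAR-derived BEV features act as queries that
        cross-attend to 3D-position-encoded image features, then a linear head
        predicts per-cell BEV segmentation logits."""

        def __init__(self, embed_dim: int = 256, num_heads: int = 8,
                     num_layers: int = 3, num_classes: int = 6):
            super().__init__()
            self.attn_layers = nn.ModuleList(
                nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
                for _ in range(num_layers)
            )
            self.norms = nn.ModuleList(nn.LayerNorm(embed_dim) for _ in range(num_layers))
            self.seg_head = nn.Linear(embed_dim, num_classes)

        def forward(self, lidar_bev: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
            # lidar_bev:   (B, H*W, C)  flattened BEV grid features from the point cloud
            # image_feats: (B, N, C)    multi-view image tokens with 3D positional encoding added
            q = lidar_bev
            for attn, norm in zip(self.attn_layers, self.norms):
                fused, _ = attn(query=q, key=image_feats, value=image_feats)
                q = norm(q + fused)  # residual keeps the LiDAR-derived structure in the queries
            return self.seg_head(q)  # (B, H*W, num_classes) BEV segmentation logits


    if __name__ == "__main__":
        model = CrossAttentionFusion()
        bev = torch.randn(2, 40 * 40, 256)      # toy 40x40 BEV grid
        img = torch.randn(2, 6 * 10 * 20, 256)  # toy tokens from 6 camera views
        print(model(bev, img).shape)            # torch.Size([2, 1600, 6])

The residual connection around each cross-attention layer is one plausible way to keep the LiDAR-derived queries dominant in the fused representation, consistent with the abstract's emphasis on preserving 3D spatial information from the point cloud.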


    Title:
    PETRFusion: Multi-Sensor Fusion Based BEV Semantic Segmentation Network for Autonomous Driving

    Contributors:
    Zhao, Yang (author) / Du, Jingyu (author) / Yang, Qihang (author) / Cai, Ningze (author) / Peng, Zhinan (author) / Zhan, Huiqin (author) / Cheng, Hong (author)

    Publication date:
    24.09.2024

    Format / Extent:
    521931 bytes

    Media type:
    Article (conference paper)

    Format:
    Electronic resource

    Language:
    English



    Similar items:

    MLRFNet: Multi-Level Real-Time Fusion Semantic Segmentation Network for Autonomous Driving
    Ma, Xiaochuan / Xun, Zhijie / Mao, Bomin et al. | IEEE | 2025

    RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving
    El Madawi, Khaled / Rashed, Hazem / El Sallab, Ahmad et al. | IEEE | 2019