LiDAR has become a standard sensor for autonomous driving applications because it provides highly precise 3D point clouds. LiDAR is also robust in low-light scenarios, such as night-time driving or heavy shadow, where camera performance degrades. LiDAR perception is gradually maturing for tasks such as object detection and SLAM, but semantic segmentation on LiDAR data remains relatively unexplored. Motivated by the maturity of semantic segmentation on image data, we explore sensor-fusion-based 3D segmentation. Our main contribution is to convert the RGB image to the polar-grid mapping representation used for LiDAR and to design early and mid-level fusion architectures; additionally, we design a hybrid fusion architecture that combines both fusion schemes. We evaluate our approach on the KITTI dataset, which provides segmentation annotations for cars, pedestrians and cyclists. Using two state-of-the-art architectures, SqueezeSeg and PointSeg, we improve the mIoU score by 10% in both cases relative to the LiDAR-only baseline.
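The paper's polar-grid mapping is not detailed in this record, but the idea follows the spherical range-image projection used by SqueezeSeg-style networks on KITTI. Below is a minimal sketch of how an RGB image could be sampled into such a grid; the function name, the 64x512 grid size, the 8-channel layout, and the calibration inputs (`T_cam_velo`, `P_rect`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lidar_rgb_to_polar_grid(points, rgb_image, T_cam_velo, P_rect,
                            H=64, W=512, fov_up=3.0, fov_down=-25.0):
    """Build an (H, W, 8) polar grid: x, y, z, intensity, range, R, G, B.

    points:     (N, 4) LiDAR points (x, y, z, intensity) in the Velodyne frame
    rgb_image:  (h, w, 3) uint8 camera image from the same KITTI frame
    T_cam_velo: (4, 4) LiDAR-to-camera extrinsic (from KITTI calibration)
    P_rect:     (3, 4) rectified camera projection matrix
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-6

    # Spherical coordinates of each point: azimuth (yaw) and elevation (pitch).
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / r)
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)

    # Discretise the angles into grid cells (SqueezeSeg-style 64x512 for the HDL-64E).
    u = (((1.0 - (yaw + np.pi) / (2 * np.pi)) * W).astype(int)) % W
    v = np.clip((fov_up_r - pitch) / (fov_up_r - fov_down_r) * H, 0, H - 1).astype(int)

    # Project the same points into the camera image to sample their RGB values.
    pts_h = np.hstack([points[:, :3], np.ones((len(points), 1))])
    cam = T_cam_velo @ pts_h.T                    # (4, N) points in camera frame
    pix = P_rect @ cam                            # (3, N) homogeneous pixel coords
    px = (pix[0] / pix[2]).astype(int)
    py = (pix[1] / pix[2]).astype(int)
    h_img, w_img = rgb_image.shape[:2]
    in_view = (cam[2] > 0) & (px >= 0) & (px < w_img) & (py >= 0) & (py < h_img)

    grid = np.zeros((H, W, 8), dtype=np.float32)
    grid[v, u, 0:4] = points[:, :4]               # raw point channels
    grid[v, u, 4] = r                             # range channel
    grid[v[in_view], u[in_view], 5:8] = rgb_image[py[in_view], px[in_view]] / 255.0
    return grid
```

Points behind the camera or outside the image keep zero RGB, reflecting that the camera covers only the front sector of the 360° LiDAR sweep. Once the RGB channels live on the same grid as the LiDAR channels, early fusion reduces to input concatenation while mid-level fusion concatenates per-modality feature maps; a hedged PyTorch sketch of both stems, with hypothetical class names:

```python
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    """Early fusion: RGB enters at the input, so a single encoder
    sees the full 8-channel polar grid."""
    def __init__(self, lidar_ch=5, rgb_ch=3, out_ch=64):
        super().__init__()
        self.conv = nn.Conv2d(lidar_ch + rgb_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, lidar_grid, rgb_grid):
        return self.conv(torch.cat([lidar_grid, rgb_grid], dim=1))

class MidFusionStem(nn.Module):
    """Mid-level fusion: each modality gets its own shallow encoder;
    feature maps are concatenated deeper in the network."""
    def __init__(self, lidar_ch=5, rgb_ch=3, out_ch=64):
        super().__init__()
        self.lidar_enc = nn.Conv2d(lidar_ch, out_ch // 2, kernel_size=3, padding=1)
        self.rgb_enc = nn.Conv2d(rgb_ch, out_ch // 2, kernel_size=3, padding=1)

    def forward(self, lidar_grid, rgb_grid):
        return torch.cat([self.lidar_enc(lidar_grid),
                          self.rgb_enc(rgb_grid)], dim=1)
```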


Title: RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving

Contributors:

Publication date: 2019-10-01

Size: 896832 bytes

Type of media: Conference paper

Type of material: Electronic Resource

Language: English


Similar items:

LiSeg: Lightweight Road-object Semantic Segmentation In 3D LiDAR Scans For Autonomous Driving
Zhang, Wenquan / Zhou, Chancheng / Yang, Junjie et al. | IEEE | 2018

Location-Guided LiDAR-Based Panoptic Segmentation for Autonomous Driving
Xian, Guozeng / Ji, Changyun / Zhou, Lin et al. | IEEE | 2023