Position estimation of the surrounding objects seen by the sensors mounted on an autonomous vehicle is a key module, and it is typically carried out with camera-lidar fusion owing to the high accuracy of depth estimation from the lidar point cloud. With a typical automotive lidar of 64 scan lines or fewer, object detection at distances above 100 m is not dependable because the lidar clusters become sparse, whereas a high-resolution camera can still offer reliable detection at such distances. Position calculation is best achieved when there is a reliable means of finding the lidar points that correspond to a camera detection. However, the correspondence between camera pixels and the lidar point cloud tends to suffer when the object of interest is occluded (e.g., by other vehicles, guard rails, or poles) or when the camera object detection module produces false detections (e.g., due to mirror reflections). To address this, we propose a novel grid-based approach that fuses camera object detection with panoptic segmentation and then associates the result with lidar point cloud data and lidar object detections for accurate distance estimation: a grid is created in the point cloud around the object position derived from the camera detections. We take into account the occlusion level of the camera-detected objects with the help of panoptic segmentation of the image frames, so that only the lidar points corresponding to the actually visible parts of the object are used for fusion and distance estimation. Panoptic segmentation provides both instance and semantic segmentation and helps identify the visible points when objects of the same class occlude each other. This removes the lidar points of static and background objects projected onto the camera detection bounding boxes, which in turn helps the fusion algorithm identify valid clusters for distance estimation. Positions estimated from the camera 2D detections are then associated with the lidar detections using the closest Euclidean distance. We evaluated the algorithm on a custom dataset and observed a 28% increase in recall compared to fusion using camera object detection alone.
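
As an illustration of the visibility filtering and association steps described in the abstract, here is a minimal Python sketch. The function names, array layouts, and inputs (pre-projected lidar points, a per-pixel instance-id mask) are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def visible_lidar_points(points_uv, points_xyz, bbox, panoptic_mask, instance_id):
    """Keep only the lidar points that project onto pixels belonging to the
    detected instance itself, discarding points that fall on occluders or
    background objects inside the detection box.

    points_uv     : (N, 2) projected pixel coordinates of the lidar points
    points_xyz    : (N, 3) lidar points in the vehicle frame
    bbox          : (u_min, v_min, u_max, v_max) camera detection box
    panoptic_mask : (H, W) per-pixel instance-id map from panoptic segmentation
    instance_id   : id of the detected object in the panoptic mask
    """
    u = points_uv[:, 0].astype(int)
    v = points_uv[:, 1].astype(int)
    h, w = panoptic_mask.shape
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    in_box = (u >= bbox[0]) & (u <= bbox[2]) & (v >= bbox[1]) & (v <= bbox[3])
    candidates = np.where(in_image & in_box)[0]
    # The panoptic mask separates instances even within the same class, so
    # points landing on a same-class occluding vehicle are rejected as well.
    on_instance = panoptic_mask[v[candidates], u[candidates]] == instance_id
    return points_xyz[candidates[on_instance]]

def associate_camera_with_lidar(camera_position, lidar_positions):
    """Match a camera-derived position estimate to the closest lidar
    detection by Euclidean distance (positions in the vehicle frame)."""
    dists = np.linalg.norm(lidar_positions - camera_position, axis=1)
    best = int(np.argmin(dists))
    return best, float(dists[best])
```

The surviving points would then feed the grid-based clustering for the final distance estimate; only the association step by closest Euclidean distance is shown here.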


    Title:

    Panoptic Based Camera and Lidar Fusion for Distance Estimation in Autonomous Driving Vehicles


    Additional title information:

    SAE Technical Papers


    Contributors:
    P, Aparna M (author) / Thayyil Ravi, Arunkrishna (author) / Jose, Edwin (author) / Rajan, Manoj (author) / Patil, Mrinalini (author)


    Conference:

    10th SAE India International Mobility Conference; 2022


    Publication date:

    2022-10-05


    Media type:

    Conference paper


    Format:

    Print


    Language:

    English




    Similar titles:

    Panoptic Based Camera and Lidar Fusion for Distance Estimation in Autonomous Driving Vehicles

    Jose, Edwin / P, Aparna M / Patil, Mrinalini et al. | British Library Conference Proceedings | 2022




    Location-Guided LiDAR-Based Panoptic Segmentation for Autonomous Driving

    Xian, Guozeng / Ji, Changyun / Zhou, Lin et al. | IEEE | 2023


    LiDAR-PDP: A LiDAR-Based Panoptic Dynamic Driving Environment Perception Algorithm

    Wang, Hai / Li, Jianguo / Cai, Yingfeng et al. | IEEE | 2025


    Depth-Aware Panoptic Segmentation with Mask Transformers and Panoptic Bins for Autonomous Driving

    Petrovai, Andra / Miclea, Vlad-Cristian / Nedevschi, Sergiu | IEEE | 2024