Inspired by the ideas behind superpixels, which segment an image into homogeneous regions to accelerate subsequent processing steps (e.g. tracking), we present a sensor-fusion-based segmentation approach that generates dense depth regions referred to as supersurfaces. The method fuses a LiDAR point cloud with a camera image to produce an over-segmentation of the three-dimensional scene into piecewise planar surfaces using a multi-label Markov Random Field (MRF). The experimental results compare these supersurfaces against image-based superpixels and RGBD-based segments on a subset of the KITTI dataset. We observed that, for a fixed number of segments, supersurfaces are less redundant and more accurate in terms of average boundary recall.
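For illustration, the sketch below shows one way such a multi-label MRF over-segmentation could be set up: candidate planes act as labels, the data term is the point-to-plane distance, and a Potts term on a k-nearest-neighbour graph encourages neighbouring points to share a label. Everything in the sketch (the ICM optimiser, the function names, the parameters lam and k) is an assumption for illustration, not the paper's actual formulation or implementation.

# A minimal sketch (not the authors' implementation) of a multi-label MRF
# energy for piecewise-planar over-segmentation. Labels are candidate planes;
# the data term is point-to-plane distance, the smoothness term is a Potts
# penalty on a k-nearest-neighbour graph. Minimised here with simple iterated
# conditional modes (ICM); the paper does not specify this optimiser.
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_dist(points, plane):
    """Unsigned distance of Nx3 points to a plane (n, d) with |n| = 1."""
    n, d = plane
    return np.abs(points @ n + d)

def icm_mrf(points, planes, k=8, lam=0.05, iters=10):
    """Assign each 3-D point one candidate plane by minimising
    sum_i dist(p_i, plane_{l_i}) + lam * sum_{(i,j)} [l_i != l_j]."""
    data = np.stack([point_to_plane_dist(points, pl) for pl in planes], axis=1)
    nbrs = cKDTree(points).query(points, k=k + 1)[1][:, 1:]  # drop self
    labels = data.argmin(axis=1)  # initialise with the best data term
    for _ in range(iters):
        for i in range(len(points)):
            # Potts penalty: number of neighbours disagreeing with each label
            counts = np.bincount(labels[nbrs[i]], minlength=data.shape[1])
            labels[i] = (data[i] + lam * (k - counts)).argmin()
    return labels

# Toy usage: two noisy horizontal planes, two candidate plane labels.
rng = np.random.default_rng(0)
pts = np.vstack([
    np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0.0, 0.01, 200)],  # z ~ 0
    np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0.5, 0.01, 200)],  # z ~ 0.5
])
cands = [(np.array([0., 0., 1.]), 0.0), (np.array([0., 0., 1.]), -0.5)]
print(np.bincount(icm_mrf(pts, cands)))  # roughly [200, 200]

In practice, alpha-expansion graph cuts are the more common optimiser for multi-label MRFs; ICM is used above only to keep the sketch short and self-contained.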
Region segmentation using LiDAR and camera
01.10.2017
1385889 bytes
Conference paper
Electronic resource
English
M2S-RoAD: Multi-Modal Semantic Segmentation for Road Damage Using Camera and LiDAR Data
ArXiv | 2025
LIDAR-Camera Fusion Where LIDAR and Camera Validly See Different Things
Europäisches Patentamt | 2022
MERGING LiDAR INFORMATION AND CAMERA INFORMATION
Europäisches Patentamt | 2022