State-of-the-art approaches for the semantic labeling of LiDAR point clouds heavily rely on the use of deep Convolutional Neural Networks (CNNs). However, transferring network architectures across different LiDAR sensor types represents a significant challenge, especially due to sensor-specific design choices with regard to network architecture as well as data representation. In this paper, we propose a new CNN architecture for the point-wise semantic labeling of LiDAR data which achieves state-of-the-art results while increasing portability across sensor types. This represents a significant advantage given the fast-paced development of LiDAR hardware technology. We perform a thorough quantitative cross-sensor analysis of semantic labeling performance in comparison to a state-of-the-art reference method. Our evaluation shows that the proposed architecture is indeed highly portable, yielding an improvement of 10 percentage points in the Intersection-over-Union (IoU) score when compared to the reference approach. Further, the results indicate that the proposed network architecture can provide an efficient way for the automated generation of large-scale training data for novel LiDAR sensor types without the need for extensive manual annotation or multi-modal label transfer.
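For reference, the IoU score cited in the abstract is the standard per-class Intersection-over-Union averaged over classes. The following is a minimal sketch of how such a score can be computed for point-wise LiDAR labels; the function name and example arrays are illustrative assumptions, not taken from the paper.

import numpy as np

def mean_iou(pred_labels, gt_labels, num_classes):
    """Per-class Intersection-over-Union for point-wise labels, averaged over classes."""
    ious = []
    for c in range(num_classes):
        pred_c = pred_labels == c
        gt_c = gt_labels == c
        intersection = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(intersection / union)
    return float(np.mean(ious))

# Example: one predicted and one ground-truth class id per LiDAR point
pred = np.array([0, 1, 1, 2, 2, 0])
gt = np.array([0, 1, 2, 2, 2, 0])
print(mean_iou(pred, gt, num_classes=3))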
Analyzing the Cross-Sensor Portability of Neural Network Architectures for LiDAR-based Semantic Labeling
2019-10-01
2107439 bytes
Conference paper
Electronic Resource
English
Software portability in open architectures | IEEE | 2001
Software Portability in Open Architectures | British Library Conference Proceedings | 2001
Patch-Based Semantic Labeling of Road Scene Using Colorized Mobile LiDAR Point Clouds | Online Contents | 2016
Patch-Based Semantic Labeling of Road Scene Using Colorized Mobile LiDAR Point Clouds | Online Contents | 2015