To train a well-performing neural network for semantic segmentation, a large dataset with ground-truth labels is crucial so that the network can generalize to unseen data. In this paper we present novel point cloud augmentation methods to artificially diversify a dataset. Our sensor-centric methods keep the data structure consistent with the capabilities of the lidar sensor. These methods allow us to enrich low-value data with high-value instances, as well as to create entirely new scenes. We validate our methods on multiple neural networks with the public SemanticKITTI [3] dataset and demonstrate that all networks improve over their respective baselines. In addition, we show that our methods enable the use of very small datasets, saving annotation time, training time, and the associated costs.
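
As a concrete illustration of the sensor-centric idea (a minimal sketch of occlusion-aware object insertion, not the authors' actual pipeline), the code below adds an extra object to a scan and keeps, per range-image cell, only the return a real spinning lidar could have measured. All names (`to_range_image_indices`, `insert_object`), the horizontal resolution, and the vertical field of view are assumptions loosely modeled on a 64-beam sensor such as the one behind SemanticKITTI.

```python
import numpy as np

def to_range_image_indices(points, h_res_deg=0.2, v_fov=(-24.8, 2.0), v_bins=64):
    """Map 3D points (N, 3) to (row, col) cells of a spherical range image.

    Resolution and vertical field of view are assumptions loosely modeled
    on a 64-beam spinning lidar; they are not taken from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))                      # (-180, 180]
    elevation = np.degrees(np.arcsin(
        np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0)))
    cols = ((azimuth + 180.0) / h_res_deg).astype(int)
    rows = (elevation - v_fov[0]) / (v_fov[1] - v_fov[0]) * (v_bins - 1)
    rows = np.clip(rows.astype(int), 0, v_bins - 1)
    return rows, cols, r

def insert_object(scene_pts, scene_labels, obj_pts, obj_label):
    """Occlusion-aware insertion: per range-image cell the nearer return
    wins, so scene points hidden behind the inserted object (and object
    points hidden behind the scene) are dropped.
    """
    all_pts = np.vstack([scene_pts, obj_pts])
    all_labels = np.concatenate(
        [scene_labels, np.full(len(obj_pts), obj_label)])
    rows, cols, r = to_range_image_indices(all_pts)
    order = np.argsort(r)                       # nearest points first
    cell = rows[order] * 100000 + cols[order]   # flatten (row, col) key
    _, first = np.unique(cell, return_index=True)
    keep = order[first]                         # nearest return per cell
    return all_pts[keep], all_labels[keep]
```

Resolving each cell to its nearest return preserves the single-return-per-beam structure of a real scan, which is what keeps such an augmented cloud consistent with what the sensor could actually have seen.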


    Title: What Can be Seen is What You Get: Structure Aware Point Cloud Augmentation
    Contributors:
    Publication date: 2022-06-05
    Size: 2,890,019 bytes
    Type of media: Conference paper
    Type of material: Electronic Resource
    Language: English

    Similar titles:

    Learning Point Cloud Augmentation Policies

    Song, Yang / Cheng, Shuyang / Guo, Zijian et al. | European Patent Office | 2021

    Leveraging Smooth Deformation Augmentation for LiDAR Point Cloud Semantic Segmentation

    Qiu, Shoumeng / Chen, Jie / Lai, Chenghang et al. | IEEE | 2024

    To what depth can a submarine be seen from the air?

    Genova, A. | Engineering Index Backfile | 1929



    Alcohol and Driving: Why We Have Seen Changes and What Is Next? In France

    Biecheler, M.-B. / Jaye, M.-C. / Väg- och transportforskningsinstitutet | British Library Conference Proceedings | 1992