There is a surge in the automation of vehicles, with the prime focus on improving road safety. In general, robots are expected to operate in a continuously evolving environment, and understanding the underlying causes of the changes is a key enabler for achieving the desired level of automation. These changes are intertwined with the motion and semantic characteristics of the various entities in the environment. In this thesis, we propose different methods to infer these characteristics, with the final objective of holistic scene understanding using 3D LiDAR data.

Robots often operate in non-static environments, sharing the space with various dynamic objects. For safe and efficient navigation, it is necessary to detect such objects and, furthermore, to predict their future state. To address these challenges, we delve into the problem of estimating motion models, with the objective of understanding the dynamic characteristics solely from the estimated motion. Estimating motion requires observing the same parts of the environment more than once and finding an association between the observations. Targeting this problem, we propose a local feature descriptor learned from 3D LiDAR scans using a deep convolutional neural network. Such a descriptor enables finding correspondences between keypoints and paves the way for estimating motion models. Building on this, we propose a novel method for the detection and tracking of dynamic objects. In an iterative fashion, we estimate rigid motion models for the various objects in the scene and then detect dynamic objects solely based on their motion. For tracking, we again utilize the motion information to associate objects across consecutive scans.

This method implicitly assumes that a scene can be decomposed into a set of objects. To infer dynamic characteristics at a finer level of granularity, we propose a novel method for estimating a dense rigid motion field. It relies on the sole assumption that objects are locally rigid and is capable of estimating an arbitrary number of different motions for both rigid and non-rigid objects. To infer the motion state of points in a LiDAR scan, we propose a hidden Markov model-based method that uses the motion field as a measurement source.

The dynamic characteristics are closely related to the semantic properties of the different objects in the environment. To extract those, we propose a deep convolutional neural network for the semantic segmentation of 3D LiDAR scans. The data collected by a LiDAR scanner, or any other sensor, is sequential, which we leverage by using a Bayes filter approach to make the semantic predictions temporally consistent. The filter combines the predictions of the network from the current and previous scans, thereby making the system robust to isolated incorrect predictions from the network. Finally, to exploit the inherent relationship between motion and semantic properties, we propose a novel approach to classify points in a LiDAR scan as non-movable, movable, or dynamic. This approach seamlessly combines motion and learned semantic cues, enabling a holistic understanding of the scene.
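
To make the correspondence-and-motion step above concrete, the following is a minimal sketch in which keypoint descriptors are matched by nearest neighbour in feature space and a rigid transform is recovered from the matched points with the standard Kabsch/SVD method. The greedy matcher and all function names are illustrative placeholders under these assumptions, not the learned descriptor or the estimation procedure proposed in the thesis.

    import numpy as np

    def match_descriptors(desc_a, desc_b):
        # Greedy nearest-neighbour matching in descriptor space; a stand-in
        # for matching learned 3D LiDAR feature descriptors.
        dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
        return np.argmin(dists, axis=1)  # for each keypoint in A, its match in B

    def estimate_rigid_motion(pts_a, pts_b):
        # Least-squares rigid transform (R, t) such that pts_b = R @ pts_a + t
        # in the least-squares sense, via the Kabsch/SVD method.
        ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
        H = (pts_a - ca).T @ (pts_b - cb)          # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cb - R @ ca
        return R, t

In an iterative pipeline of the kind the abstract describes, such estimates would typically be recomputed per object hypothesis, with outlier correspondences rejected, e.g. by RANSAC.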
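
The per-point motion-state inference can be pictured as a two-state hidden Markov model, with the magnitude of the estimated motion field acting as the measurement. The sketch below shows one forward (predict-correct) step; the transition matrix and the exponential likelihood model are placeholder assumptions chosen for illustration, not the parameters used in the thesis.

    import numpy as np

    # Hidden states: 0 = static, 1 = dynamic. The transition matrix encodes
    # that a point tends to keep its motion state between consecutive scans
    # (placeholder values).
    TRANSITION = np.array([[0.9, 0.1],
                           [0.1, 0.9]])

    def measurement_likelihood(speed, scale=0.1):
        # p(measured speed | state): high for the static state when the
        # motion-field magnitude is near zero, and vice versa. A simple
        # exponential model used purely for illustration.
        p_static = np.exp(-speed / scale)
        return np.array([p_static, 1.0 - p_static + 1e-6])

    def hmm_step(belief, speed):
        # One forward (predict + correct) update for a single point.
        predicted = TRANSITION.T @ belief
        posterior = measurement_likelihood(speed) * predicted
        return posterior / posterior.sum()

    belief = np.array([0.5, 0.5])            # uninformed initial belief
    for speed in [0.02, 0.05, 1.4, 1.6]:     # motion-field magnitudes per scan
        belief = hmm_step(belief, speed)
    # belief[1] is now the probability that the point is dynamic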
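
The temporal smoothing of the semantic predictions can likewise be sketched as a recursive Bayes update of per-point class probabilities, treating each new network output as an independent measurement. Point correspondence across scans is assumed given here, and all names are illustrative.

    import numpy as np

    def fuse_semantics(prior, network_probs, eps=1e-6):
        # Recursive Bayesian update of per-point class probabilities.
        #   prior:         (N, C) belief carried over from previous scans,
        #                  assumed already associated with the current points
        #   network_probs: (N, C) softmax output of the segmentation network
        # The small eps keeps the belief from locking onto one class forever.
        posterior = prior * network_probs + eps
        return posterior / posterior.sum(axis=1, keepdims=True)

    # A single isolated misprediction shifts, but does not overwrite,
    # the accumulated belief.
    belief = np.full((1, 3), 1.0 / 3.0)       # three classes, uniform prior
    for probs in ([[0.8, 0.1, 0.1]], [[0.7, 0.2, 0.1]], [[0.1, 0.8, 0.1]]):
        belief = fuse_semantics(belief, np.array(probs))
    print(belief)   # class 0 still dominates despite the outlier in the last scan

This is the sense in which the filter makes the system robust to isolated incorrect predictions: one bad scan merely dilutes a belief built up over many consistent ones.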


    Title:
    Leveraging motion and semantic cues for 3D scene understanding

    Contributors:
    Dewan, Ayush (author)

    Publication date:
    2020-01-01

    Media type:
    Thesis

    Format:
    Electronic resource

    Language:
    English

    Classification:
    DDC: 004 / 629





    Similar titles:

    Bayesian Fusion of Camera Metadata Cues in Semantic Scene Classification

    Boutell, M. / Luo, J. / IEEE Computer Society | British Library Conference Proceedings | 2004


    Rainy Night Scene Understanding With Near Scene Semantic Adaptation

    Di, Shuai / Feng, Qi / Li, Chun-Guang et al. | IEEE | 2021


    Leveraging multiple cues for recognizing family photos

    Wang, Xiaolong / Guo, Guodong / Merler, Michele et al. | British Library Online Contents | 2017