Outdoor robot navigation is the task of finding a drivable route to a predefined goal under uncertain and only partially observable environment information. Typically, the long-term destination is defined by coordinates in a global map, and a trajectory planner carries out local obstacle avoidance in order to reach that goal. To this end, a simplified representation of the environment, such as a Cartesian occupancy grid, is generated, which represents every entity in the environment as an obstacle. As a result, each object, regardless of whether it is a tree, a person, or a car, is abstracted into an obstacle, and relevant information about the object is lost in the process. During map generation, data association and mapping routines constantly evaluate every object in the environment; that is, a large amount of processing power is spent on mapping all obstacles, independent of their relevance for the actual planning task. Local planning and obstacle avoidance mechanisms then operate on these simplified environment representations. Because of this simplification, they generate paths without an understanding of the environment they are meant to navigate in, which results in a decoupling of perception and action planning.

The presented approach, called object-related navigation, aims to remedy some of the weaknesses that accompany global, map-based navigation approaches. Instead of incorporating the entire environment into the planning routines in the form of a heavily abstracted environment map, classified objects are considered, and their relevance for the task at hand is determined. These objects are incorporated directly into the navigation and reasoning algorithms in order to narrow the gap between perception, planning, and robot control. Relative spatial information, obtained from the robot's local sensor data, is conveyed directly to the planning and control routines instead of being transformed into a global representation. By training a motion model per driving maneuver, such as "overtake" or "turn left", both robot motion generation and object motion prediction relative to other objects are achieved. These models make it possible to form plans of the sort "follow lane, then turn left at the next crossroad, …", which simplifies the reasoning routines and allows plans to be conveyed directly to a human observer and vice versa.

The results showed that the trained maneuver models were able to steer the autonomous robot Munich Cognitive Autonomous Robot Car 3rd Generation (MuCAR-3) when a route description of the form "follow lane, then turn left at the crossroad, …" was provided. Furthermore, the ability of the trained models to predict the motion of the robot and of other traffic participants, as well as their ability to predict real-world percepts, was evaluated.
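To make this plan representation concrete, the following is a minimal Python sketch, not taken from the thesis: all names, such as Maneuver, TrackedObject and PlanStep, and the 15 m trigger distance, are hypothetical. It illustrates how a route description like "follow lane, then turn left at the next crossroad" could be encoded as a sequence of maneuvers, each tied to a classified reference object perceived in the robot's local sensor frame rather than to coordinates in a global map.

    # Hypothetical sketch of an object-related plan representation;
    # not code from the thesis.
    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import List, Optional


    class Maneuver(Enum):
        """Driving maneuvers for which separate motion models are trained."""
        FOLLOW_LANE = auto()
        TURN_LEFT = auto()
        TURN_RIGHT = auto()
        OVERTAKE = auto()


    @dataclass
    class TrackedObject:
        """A classified object perceived in the robot's local sensor frame."""
        label: str    # e.g. "lane", "crossroad", "car"
        rel_x: float  # longitudinal offset to the robot [m]
        rel_y: float  # lateral offset to the robot [m]


    @dataclass
    class PlanStep:
        """One maneuver, executed relative to a reference object (if any)."""
        maneuver: Maneuver
        reference_label: Optional[str] = None


    def next_maneuver(plan: List[PlanStep],
                      percepts: List[TrackedObject],
                      trigger_dist: float = 15.0) -> Maneuver:
        """Return the maneuver to execute now.

        The first plan step is active; it is popped as soon as the
        reference object of the *next* step (e.g. the crossroad) comes
        within trigger_dist, so plan progress is driven by local
        percepts instead of by a position in a global map.
        """
        if len(plan) > 1 and plan[1].reference_label is not None:
            for obj in percepts:
                if (obj.label == plan[1].reference_label
                        and (obj.rel_x ** 2 + obj.rel_y ** 2) ** 0.5 < trigger_dist):
                    plan.pop(0)  # advance to the next maneuver
                    break
        return plan[0].maneuver


    # A route description of the sort used in the evaluation:
    route = [PlanStep(Maneuver.FOLLOW_LANE, "lane"),
             PlanStep(Maneuver.TURN_LEFT, "crossroad")]

In this reading, each maneuver would be backed by its trained motion model, which generates the robot's trajectory and predicts the motion of other traffic participants for as long as the step is active.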





    Title:

    Towards object-related navigation for mobile robots


    Contributors:

    Roskopf, André

    Publication date:

    2020-01-01


    Notes:

    Roskopf, André: Towards object-related navigation for mobile robots. 2020


    Media type:

    Thesis


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629



    Similar titles:

    Towards object-related navigation for mobile robots

    Roskopf, André / Universität der Bundeswehr München, Fakultät für Luft- und Raumfahrttechnik | TIBKAT | 2020


    Object-related-navigation for mobile robots

    Mueller, Andre / Wuensche, Hans-Joachim | IEEE | 2012


    Object-Related-Navigation for Mobile Robots

    Mueller, A. / Wuensche, H.J. / Institute of Electrical and Electronics Engineers | British Library Conference Proceedings | 2012


    Mobile Robots : Perception & Navigation

    Kolski, Sascha | TIBKAT | 2007

    Open access

    Mobile Robots : Perception & Navigation

    Kolski, Sascha | GWLB - Gottfried Wilhelm Leibniz Bibliothek | 2007

    Open access