This paper presents a method for high-level decision making in traffic environments. In contrast to the usual approach of modeling decision policies by hand, a Markov Decision Process (MDP) is employed to plan the optimal policy by assessing the outcomes of actions. Using probability theory, decisions are deduced automatically from knowledge about how road users behave over time. This approach neither depends on explicit situation recognition nor is it restricted to a particular set of situations or types of descriptions. Hence it is versatile and powerful. The contribution of this paper is a mathematical framework to derive abstract symbolic states from complex continuous temporal models encoded as Dynamic Bayesian Networks (DBN). For this purpose, discrete MDP states are interpreted in terms of random variables. To keep computation feasible, this state space grows adaptively during planning, according to the problem to be solved.
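
The abstract describes deriving discrete MDP states from a DBN and planning an optimal policy over them. As a rough, self-contained sketch of the MDP planning step only (not the authors' framework; the states, actions, transition probabilities, and rewards below are invented for illustration), the following Python snippet computes a greedy driving policy by value iteration:

    # Minimal illustration of MDP policy planning via value iteration.
    # The abstract driving states, actions, and all numbers are hypothetical
    # stand-ins, not the symbolic states the paper derives from a DBN.
    import numpy as np

    states = ["safe_follow", "close_gap", "critical"]
    actions = ["keep_lane", "change_lane", "brake"]

    # Transition model P[a][s, s'] and immediate reward R[a][s] (made up).
    P = {
        "keep_lane":   np.array([[0.9, 0.1, 0.0],
                                 [0.2, 0.6, 0.2],
                                 [0.0, 0.3, 0.7]]),
        "change_lane": np.array([[0.7, 0.2, 0.1],
                                 [0.5, 0.4, 0.1],
                                 [0.2, 0.4, 0.4]]),
        "brake":       np.array([[1.0, 0.0, 0.0],
                                 [0.7, 0.3, 0.0],
                                 [0.4, 0.5, 0.1]]),
    }
    R = {
        "keep_lane":   np.array([ 1.0,  0.0, -5.0]),
        "change_lane": np.array([ 0.5,  0.5, -2.0]),
        "brake":       np.array([-0.5, -0.2,  0.0]),
    }

    gamma = 0.95          # discount factor
    V = np.zeros(len(states))
    for _ in range(200):  # Bellman backups until (approximate) convergence
        V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)

    # Greedy policy: best high-level action in each abstract state.
    policy = {s: actions[int(np.argmax([R[a][i] + gamma * P[a][i] @ V
                                        for a in actions]))]
              for i, s in enumerate(states)}
    print(policy)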





    Title:

    Probabilistic MDP-behavior planning for cars


    Contributors:
    Brechtel, S. (author) / Gindele, T. (author) / Dillmann, R. (author)


    Publication date:

    01.10.2011


    Format / Extent:

    1862328 bytes





    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Behavior Planning of Autonomous Cars with Social Perception

    Sun, Liting / Zhan, Wei / Chan, Ching-Yao et al. | IEEE | 2019






    Simulator for Planning Structured Networks in Cars

    Isernhagen, Rolf / Lawrenz, Wolfhard E. | SAE Technical Papers | 1990