We present a model-free reinforcement learning algorithm to synthesize control policies that maximize the probability of satisfying high-level control objectives given as Linear Temporal Logic (LTL) formulas. Uncertainty is considered in the workspace properties, the structure of the workspace, and the agent actions, giving rise to a Probabilistically-Labeled Markov Decision Process (PL-MDP) with unknown graph structure and stochastic behaviour, a setting even more general than a fully unknown MDP. We first translate the LTL specification into a Limit Deterministic Büchi Automaton (LDBA), which is then composed with the PL-MDP in an on-the-fly product. We then define a synchronous reward function based on the acceptance condition of the LDBA, and show that the resulting RL algorithm asymptotically delivers a policy that maximizes the probability of satisfying the LTL specification. Experimental results demonstrate the efficiency of the proposed method.
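To make the pipeline in the abstract (LTL → LDBA → on-the-fly product → synchronous reward → RL) concrete, the sketch below shows tabular Q-learning on such a product. It is an illustration only, not the authors' implementation: the `mdp` and `ldba` objects and their methods (`reset`, `actions`, `step`, `label`, `initial_state`, `delta`, `accepting_states`) are hypothetical placeholders, and the sparse acceptance reward is a simplification of the paper's synchronous reward function.

```python
# A minimal sketch of LDBA-synchronized Q-learning, assuming hypothetical
# `mdp` and `ldba` interfaces. Not the authors' implementation; the sparse
# acceptance reward below simplifies the paper's synchronous reward function.
import random
from collections import defaultdict

def ldba_q_learning(mdp, ldba, episodes=5000, horizon=300,
                    alpha=0.1, gamma=0.999, eps=0.1):
    """Tabular Q-learning on product states (s, q), where s is an MDP state
    and q an LDBA state advanced by the labels observed along the run."""
    Q = defaultdict(float)  # Q[((s, q), a)]; the product is never built explicitly
    for _ in range(episodes):
        s, q = mdp.reset(), ldba.initial_state
        for _ in range(horizon):
            acts = mdp.actions(s)  # in the paper, LDBA eps-moves would appear as extra actions
            if random.random() < eps:                  # epsilon-greedy exploration
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda u: Q[(s, q), u])
            s_next = mdp.step(s, a)                    # stochastic, model-free transition
            q_next = ldba.delta(q, mdp.label(s_next))  # synchronize on the observed label
            # Synchronous reward: pay out on accepting LDBA states, so that
            # return-maximizing policies tend to visit the accepting set often.
            r = 1.0 if q_next in ldba.accepting_states else 0.0
            best_next = max(Q[(s_next, q_next), u] for u in mdp.actions(s_next))
            Q[(s, q), a] += alpha * (r + gamma * best_next - Q[(s, q), a])
            s, q = s_next, q_next
    return Q
```

Learning over (s, q) pairs is what lets a memoryless policy on the product encode the memory that the LTL objective requires; at deployment time, the greedy policy with respect to the learned Q is executed by tracking the LDBA state online.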





Title: Reinforcement learning for temporal logic control synthesis with probabilistic satisfaction guarantees

Contributors: Hasanbeig, M (author) / Kantaros, Y (author) / Abate, A (author) / Kroening, D (author) / Pappas, G (author) / Lee, I (author)

Publication date: 2019-09-13

Type of media: Conference paper

Type of material: Electronic Resource

Language: English

Classification: DDC: 006 / 629





Similar titles:

    Temporal Logic Guided Safe Model-Based Reinforcement Learning

    Cohen, Max / Belta, Calin | Springer Verlag | 2023


    Public transport trajectory planning with probabilistic guarantees

    Varga, Balázs / Tettamanti, Tamás / Kulcsár, Balázs et al. | Elsevier | 2020


    Systems control with generalized probabilistic fuzzy-reinforcement learning

    Hinojosa, J. / Nefti, S. / Kaymak, U. | BASE | 2011
