We present a model-free reinforcement learning algorithm to synthesize control policies that maximize the probability of satisfying high-level control objectives given as Linear Temporal Logic (LTL) formulas. Uncertainty is considered in the workspace properties, the structure of the workspace, and the agent actions, giving rise to a Probabilistically-Labeled Markov Decision Process (PL-MDP) with unknown graph structure and stochastic behaviour, which is even more general than a fully unknown MDP. We first translate the LTL specification into a Limit Deterministic Büchi Automaton (LDBA), which is then used in an on-the-fly product with the PL-MDP. Thereafter, we define a synchronous reward function based on the acceptance condition of the LDBA. Finally, we show that the RL algorithm delivers a policy that maximizes the satisfaction probability asymptotically. We provide experimental results that showcase the efficiency of the proposed method.
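The pipeline the abstract describes (translate the LTL formula to an LDBA, build the product with the PL-MDP on the fly, reward accepting automaton transitions, and run model-free RL over the product) can be sketched with tabular Q-learning. The sketch below is illustrative only and not the authors' implementation: the `env` and `ldba` interfaces, the reward constant `R_ACC`, and all learning hyperparameters are assumptions for the example, and the LDBA's ε-transitions (commonly handled by adding extra product actions) are omitted for brevity.

```python
import random
from collections import defaultdict

# Illustrative sketch only; the interfaces below are assumptions:
#   env.reset()        -> initial MDP state
#   env.actions(s)     -> non-empty list of actions enabled at s
#   env.step(s, a)     -> (next state, label sampled from the
#                          probabilistic labeling of the next state)
#   ldba.initial_state -> initial automaton state
#   ldba.step(q, lab)  -> (next automaton state, True iff an accepting
#                          transition of the LDBA was taken)
# All constants are placeholder values, not taken from the paper.

R_ACC = 1.0    # synchronous reward paid on accepting LDBA transitions
GAMMA = 0.99   # discount factor
ALPHA = 0.1    # learning rate
EPS = 0.1      # exploration rate

def q_learn(env, ldba, episodes=1000, horizon=200):
    """Tabular Q-learning on the on-the-fly product of env and ldba."""
    Q = defaultdict(float)  # Q[((s, q), a)]; product states built lazily
    for _ in range(episodes):
        s, q = env.reset(), ldba.initial_state
        for _ in range(horizon):
            acts = env.actions(s)
            a = (random.choice(acts) if random.random() < EPS
                 else max(acts, key=lambda b: Q[((s, q), b)]))
            s2, label = env.step(s, a)           # move in the PL-MDP
            q2, accepting = ldba.step(q, label)  # synchronize the LDBA
            r = R_ACC if accepting else 0.0      # acceptance-based reward
            best = max(Q[((s2, q2), b)] for b in env.actions(s2))
            Q[((s, q), a)] += ALPHA * (r + GAMMA * best - Q[((s, q), a)])
            s, q = s2, q2
    return Q
```

Learning over the product state (s, q) is what lets a scalar reward encode the Büchi acceptance condition; the paper's contribution is showing that, with a suitable reward scheme of this kind, the learned policy maximizes the probability of satisfying the LTL specification asymptotically.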


    Title:

    Reinforcement learning for temporal logic control synthesis with probabilistic satisfaction guarantees


    Contributors:
    Hasanbeig, M (author) / Kantaros, Y (author) / Abate, A (author) / Kroening, D (author) / Pappas, G (author) / Lee, I (author)

    Publication date:

    September 13, 2019



    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English


    Classification:

    DDC: 629 / 006



    Similar items:

    Reinforcement learning with guarantees

    Osinenko, Pavel Valerevich | TIBKAT | 2024

    Open access



    Temporal Logic Guided Safe Model-Based Reinforcement Learning

    Cohen, Max / Belta, Calin | Springer Verlag | 2023


    Sampling Policy that Guarantees Reliability of Optimal Policy in Reinforcement Learning

    Senda, K. / Iwasaki, Y. / Fujii, S. | British Library Online Contents | 2010