This paper presents a new problem-solving approach that generates optimal policies for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model, along with approximate cost-to-go values, by generating finite-length trajectories through the state space. The synergy between the evolving approximate model and the approximate cost-to-go values produces a sequence of improving policies that converges to the optimal policy through a structured search of the policy space. The approach modifies the policy update step of policy iteration to achieve fast and stable convergence. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches: (a) Q-learning, (b) Watkins's Q(λ), and (c) SARSA(λ).
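
Since the abstract describes the algorithm only at a high level, the following Python sketch illustrates one plausible reading of a trajectory-based modified policy iteration loop for a tabular, cost-minimising MDP. It is an assumption-based illustration, not the authors' exact method: the env interface (reset() and step(a) returning a next state and a stage cost), the epsilon-greedy exploration, and all hyperparameters are hypothetical.

import numpy as np

def trajectory_based_mpi(env, n_states, n_actions, gamma=0.95,
                         n_iters=200, horizon=50, eval_sweeps=5,
                         epsilon=0.1):
    """Illustrative sketch (assumed details) of trajectory-based modified
    policy iteration: refine an empirical MDP model from sampled
    trajectories, partially evaluate the policy, then improve it greedily."""
    counts = np.zeros((n_states, n_actions, n_states))  # transition counts
    cost_sum = np.zeros((n_states, n_actions))          # accumulated stage costs
    V = np.zeros(n_states)                              # cost-to-go estimates
    policy = np.zeros(n_states, dtype=int)
    rng = np.random.default_rng()

    for _ in range(n_iters):
        # 1. Generate a finite-length trajectory under the current policy
        #    (epsilon-greedy exploration) and update the approximate model.
        s = env.reset()                                 # assumed interface
        for _ in range(horizon):
            a = policy[s] if rng.random() > epsilon else rng.integers(n_actions)
            s_next, cost = env.step(a)                  # assumed: (next state, cost)
            counts[s, a, s_next] += 1
            cost_sum[s, a] += cost
            s = s_next

        # Empirical transition probabilities and expected stage costs.
        n_sa = counts.sum(axis=2, keepdims=True)
        P = np.divide(counts, n_sa, out=np.zeros_like(counts), where=n_sa > 0)
        C = np.divide(cost_sum, n_sa[..., 0],
                      out=np.zeros_like(cost_sum), where=n_sa[..., 0] > 0)

        # 2. Modified (partial) policy evaluation: a few backup sweeps
        #    instead of solving the evaluation equations exactly.
        idx = np.arange(n_states)
        for _ in range(eval_sweeps):
            V = C[idx, policy] + gamma * P[idx, policy] @ V

        # 3. Greedy policy improvement on the approximate model.
        Q = C + gamma * P @ V                           # shape: (states, actions)
        policy = Q.argmin(axis=1)                       # minimise expected cost

    return policy, V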


    Title:
    Trajectory-Based Modified Policy Iteration

    Contributors:
    R. Sharma (author) / M. Gopal (author)

    Publication date:
    20 December 2007

    Notes:
    oai:zenodo.org:1083823

    Media type:
    Journal article

    Format:
    Electronic resource

    Language:
    English

    Classification:
    DDC: 629 / 006




    Similar items:

    A policy iteration method for improving robot assembly trajectory efficiency
    ZHANG, Qi / XIE, Zongwu / CAO, Baoshi et al. | Elsevier | 2023
    Free access

    Iteration procedures for indirect trajectory optimization methods
    Lewallen, J. M. / Tapley, B. D. / Williams, S. D. | NTRS | 1968

    Iteration procedures for indirect trajectory optimization methods
    Lewallen, J. M. / Tapley, B. D. / Williams, S. D. | Engineering Index Backfile | 1968