This paper presents a new problem-solving approach that generates an optimal policy for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model, together with cost-to-go value estimates, by generating finite-length trajectories through the state space. The approach creates a synergy between the evolving approximate model and the approximate cost-to-go values to produce a sequence of improving policies that converges to the optimal policy through an intelligent, structured search of the policy space. It modifies the policy update step of policy iteration so as to achieve fast and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches: a) Q-learning, b) Watkins's Q(λ), and c) SARSA(λ).
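As a rough illustration of the scheme the abstract describes, the sketch below combines model building from sampled trajectories with a modified (partial-evaluation) policy iteration step. This is a minimal sketch, not the authors' code: the `env.reset()`/`env.step()` interface, the cost convention (costs are minimized), and every hyperparameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def trajectory_based_mpi(env, n_states, n_actions, gamma=0.95,
                         n_iters=50, n_trajs=20, horizon=100, m_eval=5):
    """Sketch of trajectory-based modified policy iteration (assumed API:
    env.reset() -> state index; env.step(a) -> (next_state, cost, done))."""
    counts = np.zeros((n_states, n_actions, n_states))  # transition counts
    cost_sum = np.zeros((n_states, n_actions))          # accumulated stage costs
    J = np.zeros(n_states)                              # cost-to-go estimate
    policy = np.zeros(n_states, dtype=int)

    for _ in range(n_iters):
        # 1) Refine the approximate MDP model with finite-length trajectories
        #    generated under the current policy.
        for _ in range(n_trajs):
            s = env.reset()
            for _ in range(horizon):
                a = policy[s]
                s2, cost, done = env.step(a)
                counts[s, a, s2] += 1
                cost_sum[s, a] += cost
                s = s2
                if done:
                    break

        n_sa = counts.sum(axis=2)                       # visits per (s, a)
        visited = n_sa > 0
        P = np.where(visited[..., None],
                     counts / np.maximum(n_sa, 1)[..., None], 0.0)
        c = np.where(visited, cost_sum / np.maximum(n_sa, 1), 0.0)
        # Unvisited (s, a) pairs default to zero cost, which acts as an
        # optimistic incentive to try them on later trajectories.

        # 2) Modified policy iteration: m_eval partial-evaluation backups
        #    under the current policy instead of solving for J exactly.
        idx = np.arange(n_states)
        for _ in range(m_eval):
            J = c[idx, policy] + gamma * (P[idx, policy] @ J)

        # 3) Greedy policy update w.r.t. the approximate model and J.
        Q = c + gamma * (P @ J)                         # shape (S, A)
        policy = Q.argmin(axis=1)                       # minimize cost-to-go

    return policy, J
```

With `m_eval = 1` this degenerates toward value iteration and with `m_eval` large toward full policy iteration; the paper's contribution lies in how the trajectory-driven model refinement and the modified policy update interact, which this sketch only approximates.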


Title: Trajectory-Based Modified Policy Iteration

Contributors: R. Sharma (author) / M. Gopal (author)

Publication date: 2007-12-20

Remarks: oai:zenodo.org:1083823

Type of media: Article (Journal)

Type of material: Electronic Resource

Language: English

Classification: DDC: 006 / 629





Similar items:

Iteration procedures for indirect trajectory optimization methods.
Lewallen, J. M. / Tapley, B. D. / Williams, S. D. | NTRS | 1968

Autonomous Soaring Policy Initialization Through Value Iteration
Rothaupt, Benjamin J. / Notter, Stefan / Fichter, Walter | TIBKAT | 2021

Autonomous Soaring Policy Initialization Through Value Iteration
Rothaupt, Benjamin J. / Notter, Stefan / Fichter, Walter | AIAA | 2021