Abstract

The required learning time and the curse of dimensionality restrict the applicability of Reinforcement Learning (RL) on real robots. The difficulty of incorporating initial knowledge and of interpreting the learned rules adds to these problems. In this paper we address automatic state abstraction and the creation of hierarchies in the RL agent's mind as two major approaches for reducing the number of learning trials, simplifying the inclusion of prior knowledge, and making the learned rules more abstract and understandable. We formalize automatic state abstraction and hierarchy creation as an optimization problem and derive a new algorithm that adapts decision tree learning techniques to state abstraction. The performance claims are supported by strong evidence from simulations in nondeterministic environments. The results show encouraging improvements in the required number of learning trials, the agent's performance, the size of the learned trees, and the computation time of the algorithm.

Keywords: State Abstraction, Hierarchical Reinforcement Learning
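
The abstract does not reproduce the algorithm itself, but the general idea of adapting decision tree learning to state abstraction can be illustrated with a short sketch. The code below is an assumption-laden illustration in the spirit of tree-based abstraction methods (e.g., U-Tree-style splitting), not the paper's actual method: Q-values are stored at the leaves of a tree over observation features, and a leaf is split when some feature separates its samples into groups with clearly different bootstrapped targets. All class names, parameters (min_samples, min_gap), and the split criterion are hypothetical stand-ins for the optimization criterion and hierarchy-creation mechanism described in the paper.

```python
import random

class Leaf:
    """An abstract state: holds Q-values and the samples observed while in it."""
    def __init__(self, n_actions, parent=None, side=None):
        self.q = [0.0] * n_actions
        self.samples = []          # (observation, action, bootstrapped target) triples
        self.parent, self.side = parent, side

class Node:
    """Internal node: tests one observation feature against a threshold."""
    def __init__(self, feature, threshold, left, right, parent=None, side=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
        self.parent, self.side = parent, side

class QTree:
    """Q-learning over an adaptively refined state abstraction (hypothetical sketch)."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_actions, self.alpha, self.gamma, self.epsilon = n_actions, alpha, gamma, epsilon
        self.root = Leaf(n_actions)

    def leaf(self, obs):
        """Descend the tree to the abstract state covering this observation."""
        node = self.root
        while isinstance(node, Node):
            node = node.left if obs[node.feature] <= node.threshold else node.right
        return node

    def act(self, obs):
        """Epsilon-greedy action selection over the leaf's Q-values."""
        leaf = self.leaf(obs)
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: leaf.q[a])

    def update(self, obs, action, reward, next_obs, done):
        """Standard one-step Q-learning update on the abstract states."""
        leaf, nxt = self.leaf(obs), self.leaf(next_obs)
        target = reward + (0.0 if done else self.gamma * max(nxt.q))
        leaf.q[action] += self.alpha * (target - leaf.q[action])
        leaf.samples.append((obs, action, target))
        self._maybe_split(leaf)

    def _maybe_split(self, leaf, min_samples=100, min_gap=0.5):
        """Refine the abstraction when some feature separates the leaf's samples
        into groups with clearly different targets (a stand-in criterion)."""
        if len(leaf.samples) < min_samples:
            return
        best = None
        n_features = len(leaf.samples[0][0])
        for f in range(n_features):
            for t in sorted({obs[f] for obs, _, _ in leaf.samples})[:-1]:
                lo = [tg for obs, _, tg in leaf.samples if obs[f] <= t]
                hi = [tg for obs, _, tg in leaf.samples if obs[f] > t]
                gap = abs(sum(lo) / len(lo) - sum(hi) / len(hi))
                if best is None or gap > best[0]:
                    best = (gap, f, t)
        if best and best[0] >= min_gap:
            _, f, t = best
            left, right = Leaf(self.n_actions), Leaf(self.n_actions)
            left.q, right.q = list(leaf.q), list(leaf.q)   # children inherit Q-values
            node = Node(f, t, left, right, leaf.parent, leaf.side)
            left.parent = right.parent = node
            left.side, right.side = "left", "right"
            if leaf.parent is None:
                self.root = node
            elif leaf.side == "left":
                leaf.parent.left = node
            else:
                leaf.parent.right = node
```

A training loop would call act(obs), step the environment, and then call update(...). The tree starts as a single abstract state and is refined only where finer distinctions change the value estimates, which is the property the abstract credits with reducing the number of learning trials and keeping the learned trees small.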


Title: Reduction of Learning Time for Robots Using Automatic State Abstraction

Contributors:

Publication date: 2006-01-01

Size: 14 pages

Type of media: Article/Chapter (Book)

Type of material: Electronic Resource

Language: English




    Efficient Failure Detection for Mobile Robots Using Mixed-Abstraction Particle Filters

    Plagemann, Christian / Stachniss, Cyrill / Burgard, Wolfram | Springer Verlag | 2006


    Automatic home video abstraction using audio contents [4925-52]

    Zhao, M. / Chen, C. F. / Chen, C. et al. | British Library Conference Proceedings | 2002


    An Abstraction

    Emerald Group Publishing | 1951


    Domain-Abstraction-Based Approach for Learning Multidomain Planning

    Chae, Hyeok-Joo / Choi, Han-Lim | AIAA | 2021