Temporal difference learning is an important class of incremental learning procedures that learn to predict the outcomes of sequential processes through experience. Although these algorithms have been used in a variety of well-known intelligent systems, such as Samuel's checkers player and Tesauro's backgammon program, their convergence properties remain poorly understood. This paper provides a brief summary of the theoretical basis for these algorithms and documents their observed convergence performance in a variety of experiments. The implications of these results are also briefly discussed.
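For context only (this sketch is not drawn from the paper itself): tabular TD(0), the simplest member of this family, nudges a state's value estimate toward a one-step bootstrapped target after every transition. Below is a minimal illustration on a classic five-state random walk; all names and parameters (N_STATES, ALPHA, GAMMA) are illustrative assumptions, not details from the paper.

```python
import random

# Minimal sketch of tabular TD(0) value prediction on a toy random walk.
# Non-terminal states 0..4; episodes start in the middle state. Stepping
# off the right end yields reward 1, off the left end reward 0.

N_STATES = 5           # number of non-terminal states (assumption)
ALPHA = 0.1            # step size (assumption)
GAMMA = 1.0            # undiscounted task (assumption)

V = [0.5] * N_STATES   # initial value estimates

def episode():
    s = N_STATES // 2
    while True:
        s_next = s + random.choice((-1, 1))
        if s_next < 0:             # left terminal: reward 0, V(terminal) = 0
            target = 0.0
        elif s_next >= N_STATES:   # right terminal: reward 1, V(terminal) = 0
            target = 1.0
        else:                      # bootstrap from the next state's estimate
            target = GAMMA * V[s_next]
        # TD(0) update: move V(s) toward the one-step bootstrapped target
        V[s] += ALPHA * (target - V[s])
        if s_next < 0 or s_next >= N_STATES:
            break
        s = s_next

for _ in range(1000):
    episode()
print([round(v, 2) for v in V])  # converges toward [1/6, 2/6, 3/6, 4/6, 5/6]
```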


    Title:
    Convergence behavior of temporal difference learning

    Contributors:
    Malhotra, R. P.

    Publication date:
    1996-01-01

    Size:
    498793 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English



    Convergence Behavior of Temporal Difference Learning

    Malhotra, R. P. / IEEE; Dayton Section / IEEE; Aerospace and Electronics Systems Society | British Library Conference Proceedings | 1996


    Collision Probability Distribution Estimation via Temporal Difference Learning

    Steinecker, Thomas / Luettel, Thorsten / Maehlisch, Mirko | IEEE | 2024


    Adaptive UAV Swarm Mission Planning by Temporal Difference Learning

    Gopalakrishnan, Shreevanth Krishnaa / Al-Rubaye, Saba / Inalhan, Gokhan | IEEE | 2021


    Intersection traffic control optimization method based on temporal difference learning

    FANG ZHONGLIANG / XU REN / LIU LIANG et al. | European Patent Office | 2021
