Temporal difference (TD) learning is an important class of incremental learning procedures that learn to predict the outcomes of sequential processes from experience. Although these algorithms have been used in a variety of well-known intelligent systems, such as Samuel's checker player and Tesauro's backgammon program, their convergence properties remain poorly understood. This paper provides a brief summary of the theoretical basis for these algorithms and documents their observed convergence behavior in a variety of experiments. The implications of these results are also briefly discussed.
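
As an illustration of the algorithm class the abstract describes, the following is a minimal sketch of tabular TD(0) prediction on a five-state random walk. The environment, step size, and episode count are illustrative assumptions and are not drawn from the paper itself.

```python
import random

# Minimal TD(0) prediction sketch on a five-state random walk
# (terminate left with reward 0, terminate right with reward 1).
# All parameters below are illustrative assumptions.

N_STATES = 5   # non-terminal states, indexed 0..4
ALPHA = 0.1    # step size
GAMMA = 1.0    # undiscounted episodic task

def run_episode(values):
    """Walk randomly from the centre state, updating V(s) toward the
    one-step TD target r + GAMMA * V(s') after every transition."""
    s = N_STATES // 2
    while True:
        s_next = s + random.choice((-1, 1))
        if s_next < 0:                    # terminated left: reward 0
            target = 0.0
        elif s_next >= N_STATES:          # terminated right: reward 1
            target = 1.0
        else:                             # non-terminal: bootstrap from V(s')
            target = GAMMA * values[s_next]
        values[s] += ALPHA * (target - values[s])   # TD(0) update
        if s_next < 0 or s_next >= N_STATES:
            return
        s = s_next

values = [0.5] * N_STATES
for _ in range(1000):
    run_episode(values)
print(values)  # drifts toward the true values 1/6, 2/6, ..., 5/6
```

With a fixed, sufficiently small step size the estimates fluctuate around the true state values; the convergence behavior of TD procedures of exactly this kind is what the paper examines empirically.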





    Title:

    Convergence behavior of temporal difference learning


    Contributors:
    Malhotra, R.P. (author)


    Publication date:

    01.01.1996


    Format / extent:

    498793 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Convergence Behavior of Temporal Difference Learning

    Malhotra, R. P. / IEEE; Dayton Section / IEEE; Aerospace and Electronics Systems Society | British Library Conference Proceedings | 1996


    Collision Probability Distribution Estimation via Temporal Difference Learning

    Steinecker, Thomas / Luettel, Thorsten / Maehlisch, Mirko | IEEE | 2024


    Adaptive UAV Swarm Mission Planning by Temporal Difference Learning

    Gopalakrishnan, Shreevanth Krishnaa / Al-Rubaye, Saba / Inalhan, Gokhan | IEEE | 2021


    Intersection traffic control optimization method based on temporal difference learning

    FANG ZHONGLIANG / XU REN / LIU LIANG et al. | Europäisches Patentamt | 2021

    Free access