Distributed stochastic gradient descent has attracted significant attention in recent years as a prevalent approach to reinforcement learning. Current distributed learning predominantly employs synchronous or asynchronous training strategies. While the asynchronous scheme avoids the idle computing resources inherent in synchronous methods, it suffers from the stale gradient problem. This paper introduces a gradient correction algorithm that alleviates this problem: by leveraging second-order information at the worker node and incorporating the current parameters of both the worker and server nodes, the algorithm yields a corrected gradient closer to the desired value. We first outline the challenges of asynchronous update schemes and derive the gradient correction algorithm from a local second-order approximation. We then propose an asynchronous training scheme that incorporates gradient correction within the generalized policy iteration framework. Finally, on trajectory tracking tasks, we compare asynchronous updates with and without gradient correction. Simulation results underscore the superiority of the proposed training scheme, which converges notably faster and achieves higher policy performance than existing asynchronous update methods.
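The correction can be made concrete with a first-order Taylor expansion of the gradient around the worker's stale parameters theta_w: with the server meanwhile at theta_s, the corrected gradient is g_corrected ≈ ∇J(theta_w) + H(theta_w)(theta_s - theta_w), where H is the Hessian at theta_w. Below is a minimal sketch of this standard correction using a Hessian-vector product in PyTorch; the function name corrected_gradient and the overall structure are illustrative assumptions, not details taken from the paper.

    import torch

    def corrected_gradient(loss_fn, theta_worker, theta_server):
        # Illustrative stale-gradient correction via a local second-order
        # approximation (an assumption, not the paper's implementation):
        # g_corrected ~= g(theta_worker) + H(theta_worker) @ (theta_server - theta_worker)
        theta = theta_worker.detach().requires_grad_(True)
        loss = loss_fn(theta)
        # Stale gradient at the worker's parameters; keep the graph for the HVP.
        (grad,) = torch.autograd.grad(loss, theta, create_graph=True)
        # Parameter drift accumulated at the server while the worker was computing.
        delta = (theta_server - theta).detach()
        # Hessian-vector product H(theta) @ delta, without forming H explicitly.
        (hvp,) = torch.autograd.grad(grad, theta, grad_outputs=delta)
        return (grad + hvp).detach()

    # On a quadratic loss J(theta) = 0.5 * theta^T A theta the correction is exact:
    # the stale gradient is A @ theta_worker, the Hessian is A, so the corrected
    # gradient equals A @ theta_server, the true gradient at the server's parameters.
    A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])
    loss_fn = lambda th: 0.5 * th @ A @ th
    theta_worker = torch.tensor([1.0, -1.0])
    theta_server = torch.tensor([0.8, -0.7])
    print(corrected_gradient(loss_fn, theta_worker, theta_server))  # == A @ theta_server

Note that the full Hessian is never materialized: the double backward pass yields H @ delta directly, so the cost of the correction stays linear in the number of parameters.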


    Title:

    Gradient Correction for Asynchronous Stochastic Gradient Descent in Reinforcement Learning


    Additional title:

    Lecture Notes in Mechanical Engineering


    Contributors:

    Conference:

    Advanced Vehicle Control Symposium 2024; Milan, Italy; September 01-05, 2024



    Publication date:

    2024-10-04


    Size:

    7 pages

    Type of media:

    Article/Chapter (Book)


    Type of material:

    Electronic Resource


    Language:

    English


    Similar titles:

    Adaptive Stochastic Gradient Descent Optimisation for Image Registration

    Klein, S. / Pluim, J. P. / Staring, M. et al. | British Library Online Contents | 2009



    Decentralized Policy Gradient Descent and Ascent for Safe Multi-Agent Reinforcement Learning

    Lu, Songtao / Horesh, Lior / Chen, Pin-Yu et al. | European Patent Office | 2023


    TDOA-Based Localization via Stochastic Gradient Descent Variants

    Abanto-Leon, Luis F. / Koppelaar, Arie / Heemstra de Groot, Sonia | IEEE | 2018