Using neural networks as function approximators in temporal-difference reinforcement learning has proved very effective in dealing with the high dimensionality of the input state space, especially in more recent developments such as deep Q-learning. These approaches share a mechanism, called experience replay, that stores previous experiences in a memory buffer and samples them uniformly for re-learning, thus improving the efficiency of the learning process. To increase learning performance further, techniques such as prioritized experience and prioritized sampling have been introduced to deal with storing and replaying, respectively, the transitions with larger TD error. In this paper, we present a concept, called Attention-Based Experience REplay (ABERE), concerned with selectively focusing the replay buffer on specific types of experiences, thereby modeling the behavioral characteristics of the learning agent in single- and multi-agent environments. We further explore how different behavioral characteristics influence the performance of agents faced with a dynamic environment that can become more hostile or benevolent by changing the relative probability of receiving positive or negative reinforcement.
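The abstract contrasts uniform experience replay with a selective, attention-weighted alternative. The Python sketch below illustrates that contrast under stated assumptions: the class names ReplayBuffer and AttentionReplayBuffer and the parameter attention_fn are hypothetical, and the weighting scheme is one plausible way to bias sampling toward particular experience types, not the ABERE formulation from the paper itself.

    import random
    from collections import deque

    import numpy as np


    class ReplayBuffer:
        """Minimal experience replay: store transitions, sample uniformly."""

        def __init__(self, capacity):
            self.buffer = deque(maxlen=capacity)

        def store(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            # Uniform sampling over stored transitions, as in standard
            # deep Q-learning experience replay.
            return random.sample(list(self.buffer), batch_size)


    class AttentionReplayBuffer(ReplayBuffer):
        """Hypothetical sketch of attention-biased replay: a weighting
        function favors specific types of experiences. Illustrative
        only; not the paper's ABERE method."""

        def __init__(self, capacity, attention_fn):
            super().__init__(capacity)
            self.attention_fn = attention_fn  # maps a transition to a weight

        def sample(self, batch_size):
            # Convert per-transition weights into a sampling distribution,
            # so heavily weighted experience types are replayed more often.
            weights = np.array([self.attention_fn(t) for t in self.buffer])
            probs = weights / weights.sum()
            idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
            return [self.buffer[i] for i in idx]


    # Example: an agent that attends twice as strongly to negatively
    # rewarded transitions, one possible "behavioral characteristic"
    # in the abstract's sense (a pessimistic or cautious agent).
    pessimistic = AttentionReplayBuffer(
        capacity=10000,
        attention_fn=lambda t: 2.0 if t[2] < 0 else 1.0,  # t[2] is the reward
    )

Varying attention_fn is what would make agents behave differently in the hostile-vs-benevolent environments the abstract describes: an agent that overweights negative rewards replays punishing transitions more often than one that overweights positive ones.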





    Title: Attention-based experience replay in deep Q-learning

    Publication date: 2017-01-01

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English

    Classification: DDC 629



    Trip replay experience

    GAYNOR PHILLIP KING | European Patent Office | 2018


    Failed Goal Aware Hindsight Experience Replay

    Kim, Taeyoung / Jeong, Haechan / Har, Dongsoo | Springer Verlag | 2024


    Experience Replay Enhances Excitation Condition of Neural-Network Adaptive Control Learning

    Qu, Chaoran / Cheng, Lin / Gong, Shengping et al. | AIAA | 2025