Using neural networks as function approximators in temporal-difference reinforcement learning problems has proved very effective for dealing with the high dimensionality of the input state space, especially in more recent developments such as deep Q-learning. These approaches share a mechanism, called experience replay, that stores previous experiences in a memory buffer and samples them uniformly for re-learning, thus improving the efficiency of the learning process. To further increase learning performance, techniques such as prioritized experience and prioritized sampling have been introduced, which preferentially store and replay, respectively, the transitions with larger TD error. In this paper, we present a concept, called Attention-Based Experience REplay (ABERE), concerned with selectively focusing the replay buffer on specific types of experiences, thereby modeling the behavioral characteristics of the learning agent in single- and multi-agent environments. We further explore how different behavioral characteristics influence the performance of agents faced with a dynamic environment that can become more hostile or benevolent by changing the relative probability of receiving positive or negative reinforcement.
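
    The abstract describes attention-based replay only at a high level, so the following is a minimal, hypothetical Python sketch of the general idea: a replay buffer whose sampling is biased toward a chosen type of experience (here, positively rewarded transitions). The class name AttentionReplayBuffer, the attention_weight parameter, and the attend_to predicate are illustrative assumptions, not the paper's actual ABERE algorithm.

        import random
        from collections import deque

        class AttentionReplayBuffer:
            """Replay buffer with sampling biased toward an attended
            experience type. NOTE: illustrative assumption, not the
            ABERE algorithm from the paper."""

            def __init__(self, capacity, attention_weight=3.0):
                self.buffer = deque(maxlen=capacity)
                # attention_weight > 1 favours the attended experience type;
                # attention_weight == 1 recovers plain uniform experience replay.
                self.attention_weight = attention_weight

            def push(self, state, action, reward, next_state, done):
                # Store one transition, evicting the oldest when full.
                self.buffer.append((state, action, reward, next_state, done))

            def sample(self, batch_size, attend_to=lambda t: t[2] > 0):
                # Weighted sampling: transitions matching the attention
                # predicate (here: positive reward) are drawn more often.
                weights = [self.attention_weight if attend_to(t) else 1.0
                           for t in self.buffer]
                return random.choices(self.buffer, weights=weights, k=batch_size)

        # Usage: buf = AttentionReplayBuffer(capacity=10000)
        #        batch = buf.sample(32)

    With attention_weight set to 1.0 this reduces to standard uniform experience replay; raising it shifts the agent's focus toward one experience type, which is one plausible way to realize the behavioral characteristics the abstract mentions.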





    Title:

    Attention-based experience replay in deep Q-learning


    Publication date:

    01.01.2017


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English


    Classification:

    DDC: 629



    Trip Replay Experience

    GAYNOR PHILLIP KING | European Patent Office | 2018

    Free access



    Failed Goal Aware Hindsight Experience Replay

    Kim, Taeyoung / Jeong, Haechan / Har, Dongsoo | Springer Verlag | 2024


    Experience Replay Enhances Excitation Condition of Neural-Network Adaptive Control Learning

    Qu, Chaoran / Cheng, Lin / Gong, Shengping et al. | AIAA | 2025