Reinforcement learning provides a valuable framework for reward-based decision making in humans, as it decomposes learning into a few computational steps. These computations are embedded in a task representation, which links together stimuli, actions, and outcomes, and an internal model, which derives contingencies from explicit knowledge. Although research on reinforcement learning has already greatly advanced our understanding of the brain, many questions remain open regarding the interaction between reinforcement learning, task representations, and internal models. By combining computational modelling, experimental manipulation, and electrophysiological recording, the three studies of this thesis aim to elucidate how task representations and internal models are shaped and how they affect reinforcement learning. In Study 1, the manipulation of action-outcome contingencies in a simple one-stage decision task made it possible to investigate the impact of explicit knowledge about task learnability on reinforcement learning. The results highlight the flexible adjustment of internal models and the suppression of central computations of reinforcement learning when a task is represented as not learnable. Using a similar manipulation, Study 2 examines whether this influence of explicit knowledge on reinforcement learning holds under the increased complexity of a two-stage environment. Again, pronounced neural differences between task conditions indicate separable computations of reinforcement learning and, more importantly, a selective influence of explicit knowledge and internal models on reinforcement learning. Study 3 uses a novel task design that requires inference about plausible action-outcome mappings and, thus, credit assignment. The findings suggest that multiple task representations are encoded neurally and compete for control of action selection, thereby solving the structural credit assignment problem. In sum, the studies of this thesis highlight the importance of reinforcement learning as a central biological principle and draw attention to the need for flexible interactions between reinforcement learning, task representations, and internal models to cope with the varying demands of the environment.
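
As a purely illustrative sketch, not drawn from the thesis: the "central computations" of reinforcement learning mentioned above are commonly modelled as a reward prediction error that drives a delta-rule update of action values. The two-armed bandit setup, function names, and parameter values below (learning rate alpha, softmax temperature beta) are assumptions chosen for illustration only.

    import math
    import random

    def softmax_choice(values, beta):
        # Softmax action selection: higher beta -> more deterministic choices.
        weights = [math.exp(beta * v) for v in values]
        threshold = random.random() * sum(weights)
        cumulative = 0.0
        for action, w in enumerate(weights):
            cumulative += w
            if cumulative >= threshold:
                return action
        return len(weights) - 1

    def run_bandit(n_trials=200, alpha=0.3, beta=3.0, reward_probs=(0.8, 0.2)):
        # Learn action values in a hypothetical two-armed bandit via the delta rule.
        values = [0.0, 0.0]
        for _ in range(n_trials):
            action = softmax_choice(values, beta)
            reward = 1.0 if random.random() < reward_probs[action] else 0.0
            prediction_error = reward - values[action]   # reward prediction error
            values[action] += alpha * prediction_error   # delta-rule value update
        return values

    if __name__ == "__main__":
        print(run_bandit())

In a model of this kind, representing a task as not learnable could, for instance, be captured by driving the learning rate toward zero, which suppresses the value update while leaving the rest of the computation intact.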


    Title: Electrophysiological correlates of reinforcement learning and credit assignment

    Contributors: Wurm, Franz (Author)

    Publication date: 5 November 2020

    Media type: Thesis

    Format: Electronic resource

    Language: English

    Classification: DDC 629 / 150




    Similar titles:

    Flight Gate Assignment Problem with Reinforcement Learning
    Muhafız Yıldız, Müge / Avcı, Umut / Örnek, Mustafa Arslan et al. | Springer Verlag | 2023

    Combinatorial Reinforcement Learning of Linear Assignment Problems
    Hamzehi, Sascha / Bogenberger, Klaus / Franeck, Philipp et al. | IEEE | 2019

    Reinforcement Learning Based Decentralized Weapon-Target Assignment and Guidance
    Merkulov, Gleb / Iceland, Eran / Michaeli, Shay et al. | AIAA | 2024

    Weapon–Target Assignment by Reinforcement Learning with Pointer Network
    Na, Hyungho / Ahn, Jaemyung / Moon, Il-Chul | AIAA | 2023

    Multi-UAV Cooperative Target Assignment Method Based on Reinforcement Learning
    Yunlong Ding / Minchi Kuang / Heng Shi et al. | DOAJ | 2024 (open access)