Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game, performing optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
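As background for the mixed (stochastic) equilibrium referred to above, the following is a minimal illustrative sketch of what the learning target looks like in an inspector-style 2x2 game: the equilibrium probabilities follow from the indifference conditions. The payoff numbers and variable names are hypothetical examples, not the parameterization or the spiking model used in the paper.

```python
import numpy as np

# Row player = potential violator (actions: comply, cheat);
# column player = inspector (actions: inspect, don't inspect).
# A[i, j] / B[i, j]: payoff to the row / column player (illustrative values).
A = np.array([[ 0.0,  0.0],     # comply: nothing gained either way
              [-6.0,  4.0]])    # cheat:  fined if inspected, gains otherwise
B = np.array([[-1.0,   0.0],    # comply: inspection cost / nothing happens
              [-1.0, -10.0]])   # cheat:  violation caught / violation missed

# Interior mixed Nash equilibrium: each player mixes so that the other
# player is indifferent between her two actions.
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])  # P(comply)
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])  # P(inspect)

print(f"P(comply)  at equilibrium: {p:.2f}")   # 0.90 for these payoffs
print(f"P(inspect) at equilibrium: {q:.2f}")   # 0.40 for these payoffs
```

A learning rule that "performs optimally according to a mixed Nash equilibrium" must produce action frequencies matching such interior probabilities rather than collapsing onto a pure strategy.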
Spike-based decision learning of Nash equilibria in two-player games
2012-01-01
Friedrich, Johannes; Senn, Walter (2012). Spike-based decision learning of Nash equilibria in two-player games. PLoS Computational Biology, 8(9), pp. 1–12. San Francisco, Calif.: Public Library of Science. doi:10.1371/journal.pcbi.1002691
Journal article
Electronic resource
English