Recent advances in deep reinforcement learning (RL) have led to considerable progress in many two-player zero-sum games, such as Go, Poker and StarCraft. The purely adversarial nature of such games allows for a conceptually simple and principled application of RL methods. However, real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, both of which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods, we successfully apply RL to Diplomacy: our agents convincingly outperform the previous state of the art, and game-theoretic equilibrium analysis shows that the new process yields consistent improvements.
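To make the abstract's two ingredients concrete, here is a minimal sketch in Python. It is not the paper's code: rock-paper-scissors stands in for Diplomacy, and every name and constant (approx_best_response, n_candidates, n_rollouts) is an illustrative assumption. What it demonstrates is the shape of the loop: best responses are approximated by scoring only a sampled subset of actions, and each iteration best-responds to the pool of earlier policies, as in fictitious play.

```python
import random
from collections import Counter

# A toy, self-contained sketch (not the paper's implementation) of the two
# ideas named in the abstract: a *sampled* approximate best response, which
# scores only a few candidate actions rather than enumerating the whole
# action space, and a fictitious-play-style policy iteration, where each
# new policy best-responds to the pool of all previous policies.
# Rock-paper-scissors stands in for Diplomacy's simultaneous-move game.

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 win, 0 draw, -1 loss for a single simultaneous move."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def sample_action(policy):
    return random.choices(ACTIONS, weights=[policy[a] for a in ACTIONS])[0]

def approx_best_response(opponent_pool, n_candidates=2, n_rollouts=64):
    """Approximate best response: sample a subset of candidate actions and
    score each by Monte Carlo rollouts against opponents drawn uniformly
    from the pool of past policies; commit to the best-scoring candidate."""
    candidates = random.sample(ACTIONS, n_candidates)

    def value(action):
        opponents = [random.choice(opponent_pool) for _ in range(n_rollouts)]
        return sum(payoff(action, sample_action(o)) for o in opponents) / n_rollouts

    best = max(candidates, key=value)
    return {a: (1.0 if a == best else 0.0) for a in ACTIONS}

def fictitious_play(n_iterations=300):
    """Policy iteration approximating fictitious play: keep a history of
    policies; each iteration appends an approximate best response to the
    uniform mixture over that history."""
    history = [{a: 1 / 3 for a in ACTIONS}]  # start from the uniform policy
    for _ in range(n_iterations):
        history.append(approx_best_response(history))
    average = Counter()
    for policy in history:
        for a, p in policy.items():
            average[a] += p / len(history)
    return dict(average)

if __name__ == "__main__":
    # The time-averaged policy should be close to the (uniform) equilibrium.
    print(fictitious_play())
```

Sampling candidates instead of enumerating them is what makes the same loop plausible at Diplomacy's scale, where the combinatorial action space rules out exhaustive search.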
Learning to Play No-Press Diplomacy with Best Response Policy Iteration
2020-12-12
In: Advances in Neural Information Processing Systems 33 (NeurIPS 2020). In press.
Paper
Electronic Resource
English
DDC: 629
Manipulating the Distributions of Experience used for Self-Play Learning in Expert Iteration
BASE | 2020
Part One - Gunboat Diplomacy - Tomahawk Diplomacy?
Online Contents | 2002
Autonomous Soaring Policy Initialization through Value Iteration
TIBKAT | 2021