This paper details progress in Multi-Agent Reinforcement Learning (MARL) applied to agent decision processing in complex battle-space scenarios spanning the air, surface, sub-surface, and space domains. We implement a Double Deep Q-Network (DDQN) with Minimax Q-Learning to model simultaneous, zero-sum, two-team engagements involving multiple Blue agents and Red opponents. This game-theoretic approach models both ally and opponent policies while viewing a battle as a Multi-Stage Markov Stochastic Game (MSMSG). We contrast our agent with a DDQN + Traditional Q-Learning algorithm in a single-stage 2v1 battle scenario with mixed optimal strategies. To help mitigate learning sensitivities and convergence to local optima, we implement a Genetic Programming (GP) algorithm, which outperforms both the Minimax Q-Learning and Traditional Q-Learning DDQN agents trained with stochastic gradient descent in a dynamic 1v1 battle. Lastly, we create a hybrid approach that combines stochastic-gradient-descent learning (Minimax Q-Learning) with gradient-free learning (GP) and apply it to the StarCraft II (SC2) 3m map, which simulates a 3v3 battle. We contrast this hybrid MARL approach with another state-of-the-art MARL method (QMIX) on the SC2 3m combat scenario.
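For context, the sketch below illustrates the tabular, Littman-style Minimax-Q backup for a zero-sum two-player Markov game, which is the value operator the abstract's DDQN variant builds on. The function names, hyperparameters (alpha, gamma), and toy problem dimensions are illustrative assumptions; the paper itself approximates Q with a Double Deep Q-Network rather than a table.

```python
# Hedged sketch: tabular Minimax-Q update for a zero-sum two-player Markov game.
# Names, hyperparameters, and dimensions are illustrative; the paper's agent uses
# a Double Deep Q-Network as the function approximator, not a lookup table.
import numpy as np
from scipy.optimize import linprog


def minimax_value(payoff):
    """Solve V = max_pi min_o sum_a pi(a) * payoff[a, o] via linear programming.

    Returns the game value and the maximizer's (Blue's) mixed strategy pi.
    """
    n_a, n_o = payoff.shape
    # Decision variables: [pi_1, ..., pi_{n_a}, v]; minimize -v to maximize v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action o: v - sum_a pi(a) * payoff[a, o] <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # Mixed-strategy probabilities must sum to one.
    A_eq = np.append(np.ones(n_a), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    pi, v = res.x[:n_a], res.x[-1]
    return v, pi


def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.95):
    """One backup: Q[s,a,o] <- (1 - alpha) * Q[s,a,o] + alpha * (r + gamma * V(s'))."""
    v_next, _ = minimax_value(Q[s_next])
    Q[s, a, o] = (1.0 - alpha) * Q[s, a, o] + alpha * (r + gamma * v_next)
    return Q


if __name__ == "__main__":
    # Toy example: 2 states, 2 Blue actions, 2 Red actions.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(2, 2, 2))
    Q = minimax_q_update(Q, s=0, a=1, o=0, r=1.0, s_next=1)
    print(minimax_value(Q[0]))
```

The contrast with Traditional Q-Learning drawn in the abstract comes down to this value operator: a standard Q-Learning agent would replace the minimax value with a plain max over its own actions, ignoring the opponent's best response and therefore missing mixed optimal strategies.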
Genetic Programming + Multi-Agent Reinforcement Learning: Hybrid Approaches for Decision Processes
05.03.2022
1853678 bytes
Conference paper
Electronic resource
English
Intersection decision-making method based on multi-agent deep reinforcement learning
Europäisches Patentamt | 2024
Genetic-Algorithm-Aided Deep Reinforcement Learning for Multi-Agent Drone Delivery
DOAJ | 2024
Decision-Making for Priority Vehicle Transit Based on Multi-agent Reinforcement Learning
Springer Verlag | 2025