Aerospace battle scenarios represent a challenging modeling effort, often requiring large, continuous, and simultaneous state and/or action spaces with imperfect information. We model a battle as a Multi-Stage Markov Stochastic Game (MSMSG) and facilitate agent decision making using a Double Deep Q-Network (DDQN) paradigm with Minimax Q-Learning. We demonstrate the model's performance in contrast with a DDQN agent trained using a traditional Q-learning algorithm in a 1D dynamic battle environment. Preliminary findings suggest that the DDQN + Minimax-Q agent is more robust to parameter tuning and, unlike its traditional Q-learning counterpart, can learn true optimal mixed strategies.
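The abstract names the core algorithmic combination but does not spell it out. Below is a minimal, illustrative Python sketch (not the authors' implementation) of how a minimax stage-game value, solved by linear programming, can replace the max operator in a DDQN bootstrap target for a two-player zero-sum game. All sizes, names, and hyperparameters (N_ACTIONS, STATE_DIM, the selection/evaluation split, etc.) are assumptions made for illustration.

import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linprog

N_ACTIONS, N_OPP_ACTIONS, STATE_DIM = 4, 4, 8  # assumed problem sizes

def minimax_value_and_policy(q_matrix: np.ndarray):
    """Maximin value and mixed strategy for the row player of Q[a, o].

    Solves max_pi min_o sum_a pi[a] * Q[a, o] as a linear program.
    """
    n_a, n_o = q_matrix.shape
    # Decision variables: [pi_1 .. pi_A, v]; linprog minimizes, so use -v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For each opponent action o: v - sum_a pi[a] * Q[a, o] <= 0.
    a_ub = np.hstack([-q_matrix.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # Mixed-strategy probabilities sum to one.
    a_eq = np.hstack([np.ones((1, n_a)), np.zeros((1, 1))])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=[1.0], bounds=bounds)
    return -res.fun, res.x[:n_a]

class QNet(nn.Module):
    """Maps a state to a joint Q-table over (own action, opponent action)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS * N_OPP_ACTIONS))

    def forward(self, s):
        return self.body(s).view(-1, N_ACTIONS, N_OPP_ACTIONS)

def ddqn_minimax_target(online, target, reward, next_state, done, gamma=0.99):
    """Double-DQN-style bootstrap using a minimax stage-game value.

    The mixed strategy is selected from the online net and evaluated
    against the worst-case opponent on the target net, mirroring the
    selection/evaluation split of standard Double DQN.
    """
    with torch.no_grad():
        q_online = online(next_state)[0].numpy()
        _, pi = minimax_value_and_policy(q_online)   # policy selection
        q_target = target(next_state)[0].numpy()
        v = (pi @ q_target).min()                    # worst-case evaluation
    return reward + gamma * (1.0 - done) * v

# Example usage with a random next state:
#   online, target = QNet(), QNet()
#   y = ddqn_minimax_target(online, target, reward=1.0,
#                           next_state=torch.randn(1, STATE_DIM), done=0.0)

Note that plain Minimax-Q would bootstrap directly from the minimax value of the target network's Q-table; splitting strategy selection (online net) from evaluation (target net) is one plausible way to carry the Double-Q idea over, and may differ from the paper's actual design.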
Agent Decision Processes Using Double Deep Q-Networks + Minimax Q-Learning
2021-03-06
7037750 bytes
Conference paper
Electronic Resource
English