To effectively track resident space objects (RSOs), the tasking of sensors in a distributed network requires flexible and adaptive approaches. While Deep Reinforcement Learning (DRL) has shown potential for solving the sensor allocation problem (SAP), the resulting solutions lack transparency, and the combinatorial complexity of the SAP makes them difficult to interpret. To address this, we propose a method that uses counterfactual explanations to expose the reasoning behind a DRL agent's RSO selections. The method builds an induced Bayesian network classifier (IBNC) from the trained DRL agent and uses the IBNC to generate counterfactual explanations that answer questions such as “why did the DRL agent select one RSO over another?”. We conducted simulation experiments to demonstrate the effectiveness of the DRL model and empirically show that counterfactual explanations provide useful information for understanding the decision-making process inside the DRL model, thereby supporting a responsible decision system.
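The abstract only sketches the pipeline, so the following is a minimal, hypothetical Python illustration of the counterfactual-explanation idea: given a stand-in for a trained policy that scores RSOs from their features, a greedy search finds a small feature change that would flip the agent's selection. Everything here (PolicyStub, the two-feature RSO description, the greedy single-feature search) is an assumption for illustration; the paper's actual method builds an induced Bayesian network classifier from the trained DRL agent, which is not reproduced here.

```python
# Hypothetical sketch of a counterfactual query against a trained policy.
# PolicyStub and the greedy single-feature search are illustrative stand-ins,
# not the paper's IBNC-based method.
import numpy as np

class PolicyStub:
    """Stand-in for a trained DRL policy: scores each RSO from its features."""
    def __init__(self, weights):
        self.weights = np.asarray(weights)

    def select(self, rso_features):
        # rso_features: (n_rsos, n_features); pick the highest-scoring RSO.
        return int(np.argmax(rso_features @ self.weights))

def counterfactual(policy, rso_features, target_rso, step=0.1, max_iters=100):
    """Greedily perturb the target RSO's features until the policy selects it.

    Answers "what would have to change for the agent to pick the other RSO?"
    by returning the perturbed feature matrix (or None on failure).
    """
    x = rso_features.astype(float).copy()
    for _ in range(max_iters):
        scores = x @ policy.weights
        if scores[target_rso] > np.max(np.delete(scores, target_rso)):
            return x  # the policy now selects target_rso
        # Try every single-feature change on the target RSO and keep the one
        # that most improves its score margin over the best rival.
        best = None
        for j in range(x.shape[1]):
            for delta in (step, -step):
                trial = x.copy()
                trial[target_rso, j] += delta
                s = trial @ policy.weights
                margin = s[target_rso] - np.max(np.delete(s, target_rso))
                if best is None or margin > best[0]:
                    best = (margin, j, delta)
        x[target_rso, best[1]] += best[2]
    return None  # no counterfactual found within the step budget

# Two RSOs described by hypothetical (visibility, priority) features.
policy = PolicyStub(weights=[0.7, 0.3])
features = np.array([[0.9, 0.5],   # RSO 0: the agent's original choice
                     [0.6, 0.8]])  # RSO 1: the alternative we ask about
cf = counterfactual(policy, features, target_rso=1)
print("original selection:", policy.select(features))  # -> 0
print("counterfactual features for RSO 1:", cf[1])     # visibility raised
```

In this toy setup, the returned perturbation says the agent would have picked RSO 1 had its visibility been slightly higher, which is exactly the kind of contrastive answer the paper's counterfactual explanations aim to provide.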
Explainable Deep Reinforcement Learning for Space Situational Awareness: Counterfactual Explanation Approach
02.03.2024
4183540 bytes
Article (Conference)
Electronic Resource
English
Similar items:
Cislunar Space Situational Awareness Sensor Tasking using Deep Reinforcement Learning Agents
British Library Conference Proceedings | 2022
British Library Conference Proceedings | 2021
Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning
British Library Conference Proceedings | 2016
Cooperative Space Situational Awareness
British Library Conference Proceedings | 2010
Viral Space Situational Awareness
British Library Conference Proceedings | 2013