The object population in the space around the Earth continues to grow, and advances in sensor capabilities mean that more of these objects will be detected. While this enables a better understanding of the detectable objects and the expansion of object catalogs, it also places significant stress on sensor systems and makes efficient sensor tasking a prime challenge. Various methods exist to solve sensor tasking as an optimization problem of observing objects when a priori information is available. Classical methods rely on the problem admitting a convex formulation. Computationally intensive methods based on artificial intelligence, such as machine learning, have recently gained considerable attention and remain applicable even when no convex formulation can be found. In this paper, the performance of a simple traditional greedy algorithm and of the more complex Weapon-Target Assignment algorithm is compared with that of two machine learning algorithms: ant colony optimization and distributed Q-learning. Ant colony optimization is a probabilistic, swarm-based path-finding method; distributed Q-learning seeks an optimal policy by maximizing the expected reward received. The observation of known objects in the geosynchronous region with a ground-based sensor is used as the application case, and performance is evaluated in terms of the number of objects successfully tracked, the computational efficiency of running the algorithms, and the difficulty of tuning them. The ant colony solutions track the most objects, whereas the greedy algorithm is the most efficient; in addition, ant colony optimization and distributed Q-learning require significant tuning before employment.
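The record contains only the abstract; no tasking formulation or implementation details are given. As a purely illustrative sketch of the two simplest ideas mentioned above, the Python fragment below shows a greedy tasking pass (pick the highest-priority visible object in each time slot) and a single tabular Q-learning update. The object names, visibility model, priority function, and reward are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only: a greedy tasking pass and one tabular Q-learning step.
# Objects, visibility, priority, and reward below are assumed, not from the paper.
import random
from collections import defaultdict

def greedy_tasking(time_slots, visible_objects, priority):
    """Assign each time slot to the highest-priority visible object not yet observed."""
    schedule = {}
    observed = set()
    for t in time_slots:
        candidates = [o for o in visible_objects(t) if o not in observed]
        if candidates:
            best = max(candidates, key=priority)
            schedule[t] = best
            observed.add(best)
    return schedule

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[next_state].values(), default=0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy usage: three hypothetical GEO objects visible in every one of ten slots.
Q = defaultdict(lambda: defaultdict(float))
schedule = greedy_tasking(range(10),
                          visible_objects=lambda t: ["GEO-1", "GEO-2", "GEO-3"],
                          priority=lambda o: random.random())
q_learning_update(Q, state=("slot", 0), action="GEO-1", reward=1.0,
                  next_state=("slot", 1))
print(schedule)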


    Title :
    Space Situational Awareness Sensor Tasking: Comparison of Machine Learning with Classical Optimization Methods

    Contributors :

    Published in :

    Publication date :
    2020-02-01

    Type of media :
    Article (Journal)

    Type of material :
    Electronic Resource

    Language :
    English




    Similar titles :

    Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning

    Linares, Richard | British Library Conference Proceedings | 2016


    Cislunar Space Situational Awareness Sensor Tasking using Deep Reinforcement Learning Agents

    Siew, Peng Mun | British Library Conference Proceedings | 2022


    SSA Sensor Tasking: Comparison of Machine Learning with Classical Optimization Methods

    Little, Bryan D. | British Library Conference Proceedings | 2018


    Mutual Information Based Sensor Tasking with Applications to Space Situational Awareness

    Adurthi, Nagavenkat / Singla, Puneet / Majji, Manoranjan | AIAA | 2020