Computer vision is a prominent component of many spaceflight applications, including terrain relative navigation, target tracking, hazard detection, pose estimation, and science data analysis, among others. However, due to the processing and memory limitations of radiation-hardened flight computing infrastructure, the onboard algorithms deployed for such applications thus far have been limited in functionality, relying on simple, traditional computer vision approaches. Future generations of missions with advanced science goals require a level of spacecraft autonomy that has not yet been demonstrated in flight and will need more state-of-the-art computer vision techniques. Terrestrially, the current state of the art consists of deep-learning-based approaches that are extremely compute-intensive, often requiring dedicated Graphics Processing Units (GPUs) for execution. Such methods are extremely difficult or completely infeasible to deploy in situ. To alleviate this issue, novel spacecraft computing architectures, such as the SpaceCube Low-power Edge Artificial Intelligence Resilient Node (SC-LEARN) developed by NASA Goddard Space Flight Center (GSFC), have been introduced with a focus on accelerating model inference with minimal power consumption. However, certain models may be incompatible with, or unable to benefit from, SC-LEARN and similar accelerators due to missing or unsupported operations required by complex vision architectures. This study presents a detailed quantification of this effect using the latest generation of state-of-the-art space processors from the GSFC SpaceCube family. Correlations are drawn between types of vision-based deep-learning architectures and their inference performance in terms of execution time, memory overhead, and power consumption. Eighteen models are evaluated across three SpaceCube flight platforms to build a comprehensive understanding of which vision-based deep learning techniques are realistically deployable on the current generation of spacecraft. These results provide deep learning practitioners and avionics engineers alike with the insights necessary to make advanced computer vision algorithms tractable on space platforms.
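
As a rough illustration of the kind of inference profiling the abstract describes (execution time and memory footprint of a vision model), the sketch below times a stand-in PyTorch network on CPU. It assumes a recent PyTorch/torchvision installation; the paper's actual measurement harness on the SpaceCube/SC-LEARN platforms, including its power instrumentation, is not reproduced here, and the model choice is purely hypothetical.

    # Hypothetical, minimal profiling sketch -- not the paper's measurement harness.
    import time
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()   # stand-in vision model
    x = torch.randn(1, 3, 224, 224)                # one 224x224 RGB frame

    # Rough memory footprint of the weights (parameters only, in MiB).
    param_mib = sum(p.numel() * p.element_size() for p in model.parameters()) / 2**20

    with torch.no_grad():
        for _ in range(5):                         # warm-up passes
            model(x)
        runs = 20
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)
        latency_ms = (time.perf_counter() - t0) / runs * 1000.0

    print(f"weights: {param_mib:.1f} MiB, mean latency: {latency_ms:.1f} ms")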





    Title: Profiling Vision-based Deep Learning Architectures on NASA SpaceCube Platforms

    Contributors:

    Publication date: 2024-03-02

    Size: 1117670 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English