This article presents the Spacecraft Pose Network (SPN), the first neural network-based method for on-board estimation of the pose, i.e., the relative position and attitude, of a known noncooperative spacecraft using monocular vision. In contrast to other state-of-the-art pose estimation approaches for spaceborne applications, the SPN method does not require hand-engineered features and needs only a single grayscale image to determine the pose of the spacecraft relative to the camera. The SPN method uses a convolutional neural network (CNN) with three branches to solve the problem of relative attitude estimation. The first branch of the CNN bootstraps a state-of-the-art object detection algorithm to detect a 2-D bounding box around the target spacecraft in the input image. The region inside the 2-D bounding box is then used by the other two branches of the CNN to determine the relative attitude, first classifying the input region into discrete coarse attitude labels and then regressing to a finer estimate. The SPN method then estimates the relative position using the constraints imposed by the detected 2-D bounding box and the estimated relative attitude. Further, by detecting 2-D bounding boxes of subcomponents of the target spacecraft, the SPN method generalizes readily to estimating the pose of multiple target geometries. Finally, to facilitate integration with navigation filters and to enable continuous pose tracking, the SPN method estimates the uncertainty associated with the estimated pose. The secondary contribution of this article is the generation of the Spacecraft PosE Estimation Dataset (SPEED), which is used to train and evaluate the performance of the SPN method. SPEED consists of synthetic images as well as actual camera images of a mock-up of the Tango spacecraft from the PRISMA mission. The synthetic images are created by fusing OpenGL-based renderings of the spacecraft's 3-D model with actual images of the Earth captured by the Himawari-8 meteorological satellite. The actual camera images are created using a seven degrees-of-freedom robotic arm, which positions and orients a vision-based sensor with respect to a full-scale mock-up of the Tango spacecraft with submillimeter and submillidegree accuracy. The SPN method, trained only on synthetic images, produces degree-level relative attitude errors and centimeter-level relative position errors when evaluated on the actual camera images, whose distribution was not seen during training.
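The coarse-to-fine, three-branch architecture the abstract describes can be sketched in a few lines. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation: the backbone, layer sizes, number of coarse attitude bins (n_bins), head names, and the use of one shared feature map for all three branches (the paper instead feeds the cropped bounding-box region to the two attitude branches) are all simplifications for exposition.

    # Minimal sketch of a three-branch pose network in the spirit of SPN.
    # All layer sizes and names are illustrative assumptions.
    import torch
    import torch.nn as nn

    class SPNSketch(nn.Module):
        def __init__(self, n_bins: int = 64):
            super().__init__()
            # Shared convolutional feature extractor; the paper uses a
            # state-of-the-art detection backbone here.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            feat_dim = 32 * 4 * 4
            # Branch 1: regress a 2-D bounding box (x, y, w, h) around the target.
            self.bbox_head = nn.Linear(feat_dim, 4)
            # Branch 2: classify into discrete coarse attitude labels.
            self.coarse_head = nn.Linear(feat_dim, n_bins)
            # Branch 3: refine to a continuous attitude (unit quaternion).
            self.fine_head = nn.Linear(feat_dim, 4)

        def forward(self, img: torch.Tensor):
            f = self.features(img)
            bbox = self.bbox_head(f)
            coarse_logits = self.coarse_head(f)
            quat = self.fine_head(f)
            # Normalize so the regressed 4-vector is a valid unit quaternion.
            quat = quat / quat.norm(dim=-1, keepdim=True)
            return bbox, coarse_logits, quat

    # Usage: a single grayscale image, as the abstract specifies.
    model = SPNSketch()
    bbox, coarse, quat = model(torch.randn(1, 1, 224, 224))

The point of the split is that the coarse classification narrows the attitude search to a discrete neighborhood, so the fine branch only has to resolve a small residual rotation; the relative position is then recovered from the geometric constraints of the detected bounding box together with the estimated attitude.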


    Title :

    Neural Network-Based Pose Estimation for Noncooperative Spacecraft Rendezvous


Contributors :

Sharma, Sumant / D'Amico, Simone

    Publication date :

    2020-12-01


    Size :

7926548 bytes




    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    English




Similar titles :

Robust Model-Based Monocular Pose Initialization for Noncooperative Spacecraft Rendezvous

    Sharma, Sumant / Ventura, Jacopo / D’Amico, Simone | AIAA | 2018


    Position Awareness Network for Noncooperative Spacecraft Pose Estimation Based on Point Cloud

    Liu, Xiang / Wang, Hongyuan / Chen, Xinlong et al. | IEEE | 2023


    Global Descriptors for Visual Pose Estimation of a Noncooperative Target in Space Rendezvous

    Comellini, Anthea / Le Ny, Jerome / Zenou, Emmanuel et al. | IEEE | 2021