This article revisits methods based on global descriptors to estimate the pose of a known object using a monocular camera, in the context of space rendezvous between an autonomous spacecraft and a noncooperative target. These methods estimate the pose by detection, i.e., without any prior information about the pose of the observed object, which makes them suitable for initial pose acquisition and for monitoring faults in other on-board estimators. Specifically, we consider methods that retrieve the pose of a known object using a precomputed set of invariants and geometric moments. Three classes of global invariant features are analyzed, based on complex moments, Zernike moments, and Fourier descriptors. The robustness, accuracy, and computational efficiency of the different invariants are tested and compared under various conditions. We also discuss implementation aspects of the method that lead to improved accuracy and efficiency over previously reported results. Overall, our results can be used to identify which variants of the method offer a sufficiently fast and robust solution for pose estimation by detection, with low computational requirements compatible with space-qualified processors.
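To make the moment-based descriptors concrete: a complex moment of an image f(x, y) is c_pq = sum over (x, y) of (x + iy)^p (x - iy)^q f(x, y), and an in-plane rotation by theta multiplies c_pq by e^{i(p-q)theta}, so the magnitudes of centered, scale-normalized complex moments form a rotation-invariant feature vector. The sketch below illustrates this general idea only; the function name, the moment order, and the normalization are assumptions, not the exact feature set used in the article.

```python
import numpy as np

def complex_moment_invariants(img, max_order=4):
    """Rotation-invariant magnitudes of centered, scale-normalized complex moments.

    Illustrative sketch only: the invariant set, moment orders, and
    normalization used in the article may differ.
    """
    ys, xs = np.nonzero(img)                   # pixels belonging to the object silhouette
    f = img[ys, xs].astype(float)              # weights (1.0 everywhere for a binary mask)
    m00 = f.sum()
    xc = (xs * f).sum() / m00                  # centroid; centering the coordinates
    yc = (ys * f).sum() / m00                  # gives translation invariance
    z = (xs - xc) + 1j * (ys - yc)             # centered complex pixel coordinates x + iy
    feats = []
    for p in range(max_order + 1):
        for q in range(p + 1):                 # c_qp is the conjugate of c_pq, so q <= p suffices
            if p + q == 0:
                continue                       # c_00 normalizes to 1 and carries no information
            c_pq = np.sum(z ** p * np.conj(z) ** q * f)
            c_pq /= m00 ** ((p + q) / 2 + 1)   # scale normalization, as for standard moments
            feats.append(abs(c_pq))            # rotation multiplies c_pq by e^{i(p-q)theta},
                                               # so the magnitude is rotation invariant
    return np.array(feats)
```

Pose estimation by detection would then amount to a nearest-neighbor search of such a feature vector against invariants precomputed offline from rendered views of the target.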





    Title:

    Global Descriptors for Visual Pose Estimation of a Noncooperative Target in Space Rendezvous


    Contributors:


    Publication date:

    01.12.2021


    Format / extent:

    2871015 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English





    Pose Estimation of a Noncooperative Target Based on Monocular Visual SLAM

    Lei, Ting / Liu, Xiao-Feng / Cai, Guo-Ping et al. | DOAJ | 2019

    Open access

    Robust Model-Based Monocular Pose Initialization for Noncooperative Spacecraft Rendezvous

    Sharma, Sumant / Ventura, Jacopo / D’Amico, Simone | AIAA | 2018