Traditional visual texture features are unreliable for establishing feature correspondences between images of different modalities, and hence for robust visual localization of autonomous aerial vehicles (AAVs), owing to the differing imaging principles of image sensors and to other environmental factors. To address these challenges, this article first emulates the process of human perception of different-modal images: it employs recursively gated deep convolutional neural networks to mine higher-order and lower-order image information patterns, yielding image features with more consistent representational capability across modalities. Second, it establishes feature associations across modalities and scales by designing an efficient feature-information interaction mechanism, producing an accurate geometric mapping between different-modal images. Finally, these components are combined into a visual localization technique for low-altitude AAVs operating on different-modal images in outdoor scenarios. With these improvements, the proposed method reduces localization error by 16.01% and improves efficiency by 19.52% relative to state-of-the-art methods in natural-scene experiments, meeting the practical requirements of automatic geo-localization and navigation for AAVs.
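The abstract gives no implementation details. As a rough illustration of the two mechanisms it names, the following is a minimal PyTorch sketch. The first block sketches a recursive gated convolution in the spirit of HorNet's gnConv, which "recursively gated deep convolutional neural networks" plausibly refers to; the second sketches the cross-modal feature interaction as standard cross-attention. All class names, channel splits, and hyperparameters here are assumptions for illustration, not the authors' actual design.

```python
import torch
import torch.nn as nn

class RecursiveGatedConv(nn.Module):
    """Sketch of an n-th-order recursively gated convolution (gnConv-style).

    Low-order interactions use few channels; the channel width doubles as
    the interaction order rises, so higher-order patterns get more capacity.
    """
    def __init__(self, dim: int, order: int = 3):
        super().__init__()
        self.order = order
        # e.g. dim=64, order=3 -> per-order channel widths [16, 32, 64]
        self.dims = [dim // 2 ** i for i in range(order)][::-1]
        self.proj_in = nn.Conv2d(dim, 2 * dim, kernel_size=1)
        # One depthwise conv over all order-specific feature groups at once
        self.dwconv = nn.Conv2d(sum(self.dims), sum(self.dims), kernel_size=7,
                                padding=3, groups=sum(self.dims))
        # Pointwise convs that lift features from order i to order i+1
        self.pws = nn.ModuleList(
            [nn.Conv2d(self.dims[i], self.dims[i + 1], kernel_size=1)
             for i in range(order - 1)])
        self.proj_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the projection into an initial gate and the stacked features
        gate, feat = torch.split(self.proj_in(x),
                                 [self.dims[0], sum(self.dims)], dim=1)
        feats = torch.split(self.dwconv(feat), self.dims, dim=1)
        y = gate * feats[0]                 # first-order gated interaction
        for i, pw in enumerate(self.pws):
            y = pw(y) * feats[i + 1]        # recursively raise the order
        return self.proj_out(y)

class CrossModalInteraction(nn.Module):
    """Hedged stand-in for the feature-interaction step: plain cross-attention
    that lets AAV-image tokens attend to reference-map tokens."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a: (B, N, dim) AAV tokens; feat_b: (B, M, dim) map tokens
        out, _ = self.attn(query=feat_a, key=feat_b, value=feat_b)
        return feat_a + out                 # residual update with map context

# Usage: RecursiveGatedConv(64)(torch.randn(1, 64, 32, 32)) keeps shape (1, 64, 32, 32);
# CrossModalInteraction(64)(tokens_a, tokens_b) returns tokens_a enriched with tokens_b.
```

The paper's actual interaction mechanism is described as "efficient", so it may use a linearized or coarse-to-fine attention variant rather than the full cross-attention shown here.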
Deep Feature Matching of Different-Modal Images for Visual Geo-Localization of AAVs
IEEE Transactions on Aerospace and Electronic Systems, vol. 61, no. 2, pp. 2784-2801
2025-04-01
Article (Journal)
Electronic Resource
English
Radar-Based Localization Using Visual Feature Matching
British Library Conference Proceedings | 2021