Matching images captured by a drone's camera against known Google satellite maps to determine the drone's current location is an important approach to UAV navigation in GPS-denied environments. However, due to inherent modality differences and significant geometric deformations, cross-modal image registration is challenging. This paper proposes a CNN-Transformer hybrid network for feature detection and feature matching. ResNet50 serves as the backbone for feature extraction, an improved feature fusion module fuses feature maps from different levels, and a Transformer encoder-decoder structure then performs feature matching to obtain preliminary correspondences. Finally, a geometric outlier removal method (GSM) eliminates mismatched points based on the geometric similarity of inliers, yielding more robust correspondences. In qualitative and quantitative experiments on multimodal image datasets captured by UAVs, the correct matching rate improved by 52%, 21%, and 15%, respectively, and the registration error was reduced by 36% compared with the 3MRS algorithm. In 56 experiments conducted in real-world scenarios, the localization success rate was 91.1% and the RMSE of UAV positioning was 4.6 m.
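The abstract outlines a four-stage pipeline: ResNet50 feature extraction, multi-level feature fusion, Transformer encoder-decoder matching, and geometric-similarity mismatch removal. Below is a minimal PyTorch sketch of such a pipeline; the paper's exact fusion module, Transformer configuration, and GSM criterion are not specified in the abstract, so the layer taps, the HybridMatcher and gsm_filter names, and all thresholds here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract, assuming PyTorch
# and torchvision. Every name, layer tap, and threshold below is an
# illustrative assumption; the paper's actual modules are not given here.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class HybridMatcher(nn.Module):
    """ResNet50 features -> multi-level fusion -> Transformer matching."""
    def __init__(self, dim=256):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool,
                                  r.layer1, r.layer2)   # 1/8 res., 512 ch
        self.deep = r.layer3                            # 1/16 res., 1024 ch
        self.proj_shallow = nn.Conv2d(512, dim, 1)
        self.proj_deep = nn.Conv2d(1024, dim, 1)
        # Positional encodings omitted for brevity.
        self.transformer = nn.Transformer(d_model=dim, nhead=8,
                                          num_encoder_layers=4,
                                          num_decoder_layers=4,
                                          batch_first=True)

    def tokens(self, img):
        s = self.stem(img)
        d = self.deep(s)
        # Fuse deep semantics into the shallower, finer feature map
        # (a simple stand-in for the paper's improved fusion module).
        d = F.interpolate(self.proj_deep(d), size=s.shape[-2:],
                          mode="bilinear", align_corners=False)
        f = self.proj_shallow(s) + d
        return f.flatten(2).transpose(1, 2)             # (B, HW, dim)

    def forward(self, uav_img, sat_img):
        fa, fb = self.tokens(uav_img), self.tokens(sat_img)
        # Encoder ingests satellite tokens; decoder refines UAV tokens
        # by cross-attending to them.
        out = self.transformer(src=fb, tgt=fa)
        # Similarity matrix; row-wise argmax gives preliminary matches.
        return torch.einsum("bnd,bmd->bnm", out, fb)

def gsm_filter(pts_a, pts_b, tol=0.1):
    """Illustrative geometric-similarity mismatch removal: keep a match if
    its distance ratios to the other matches agree with the global scale."""
    da = torch.cdist(pts_a, pts_a)
    db = torch.cdist(pts_b, pts_b).clamp_min(1e-8)
    off = ~torch.eye(len(pts_a), dtype=torch.bool)
    r = da / db
    scale = r[off].median()                 # robust inter-image scale
    good = ((r - scale).abs() < tol * scale) & off
    return good.float().sum(1) / off.float().sum(1) > 0.5
```

In use, a row-wise argmax over the similarity matrix would yield putative correspondences whose pixel coordinates are then passed through gsm_filter before estimating the UAV-to-satellite transform; the 0.1 tolerance and 0.5 consistency fraction are placeholder values.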
A Multimodal Image Registration Method for UAV Visual Navigation Based on Feature Fusion and Transformers
2024
Article (Journal)
Electronic resource
Unknown
Metadata by DOAJ is licensed under CC BY-SA 1.0
Incorporating global information in feature-based multimodal image registration
British Library Online Contents | 2014
3D ultrasound registration-based visual servoing for neurosurgical navigation.
BASE | 2018
Multimodal Functional and Morphological Nonrigid Image Registration
British Library Conference Proceedings | 2005