Accurate and robust vision-based pose estimation is essential for cooperative unmanned aerial vehicle (UAV) operations, particularly in formation flight and multi-UAV coordination, where precise relative positioning is critical to mission success. However, many existing systems rely on active sensors, limiting their applicability in environments with communication constraints, GNSS denial, or stealth requirements. To overcome these limitations, recent studies have explored the use of passive sensors such as cameras. Current methods, including marker-based and learning-based approaches, perform well under controlled conditions but often struggle with viewpoint variability during dynamic maneuvers. To address these challenges, this paper presents the Viewpoint-Aware Pose Estimation (VAPE) framework, which enhances robustness across diverse viewpoints while operating with passive vision sensors. VAPE integrates viewpoint classification, robust feature matching using pre-trained models, and spatial feature distribution analysis to establish accurate 2D-3D correspondences without the need for specialized markers or extensive feature annotation. Ground tests simulating formation maneuvers demonstrate that VAPE maintains reliable tracking performance, achieving mean absolute position errors below 2.5% and angular errors below 5°, indicating its potential for real-world UAV coordination tasks.
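The abstract gives no implementation details, so the following is only a minimal, hypothetical Python sketch of the final stage it mentions: recovering the relative pose of a target UAV from established 2D-3D correspondences. The use of OpenCV's solvePnPRansac, the EPnP flag, and all function and variable names are assumptions made for illustration; the paper's viewpoint classifier, pre-trained feature matcher, and spatial feature distribution analysis are not represented here.

    # Hypothetical sketch: pose from 2D-3D correspondences (not the paper's actual code).
    import numpy as np
    import cv2

    def estimate_relative_pose(points_3d, points_2d, camera_matrix, dist_coeffs=None):
        """Estimate the target UAV pose in the camera frame via PnP with RANSAC.

        points_3d: (N, 3) model points on the target UAV, in its body frame.
        points_2d: (N, 2) matched image points from the feature-matching stage.
        """
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)  # assume an undistorted / pre-rectified image
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(points_3d, dtype=np.float64),
            np.asarray(points_2d, dtype=np.float64),
            camera_matrix,
            dist_coeffs,
            flags=cv2.SOLVEPNP_EPNP,
        )
        if not ok:
            return None  # not enough consistent correspondences for this frame
        R, _ = cv2.Rodrigues(rvec)  # rotation: target body frame -> camera frame
        return R, tvec.reshape(3), inliers

In a tracking loop, the returned rotation and translation would be expressed in the camera frame and could then be transformed into the observer UAV's body or navigation frame; how VAPE handles filtering across frames is not specified in the abstract.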
VAPE: Viewpoint-Aware Pose Estimation Framework for Cooperative UAV Formation
14.05.2025
3529602 bytes
Conference paper
Electronic resource
English
Viewpoint-aware object detection and continuous pose estimation
British Library Online Contents | 2012

Cooperative Pose Estimation in a Robotic Swarm: Framework, Simulation and Experimental Results
Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2022

Kalman Filter-Aware Air-Ground Cooperative System Target Pose with Noise
Springer Verlag | 2024

Vision-based pose estimation for cooperative space objects
Online Contents | 2013

Accurate Pose Estimation Based on Multi-frame Cooperative Identification
Springer Verlag | 2022