In this paper, we propose a deep neural network-based regression approach, combined with a 3D structure-based computer vision method, to solve the relative camera pose estimation problem for autonomous UAV navigation. Unlike existing learning-based methods that train and test camera pose estimation within the same scene, our method estimates relative camera poses across various urban scenes with a single trained model. We also built the Tuebingen Buildings dataset of RGB images collected by a drone in eight urban scenes. The dataset contains over 10,000 images with corresponding 6DoF poses, as well as 300,000 image pairs with their relative translational and rotational information. We evaluate the accuracy of our method both within a scene and across scenes, using the Cambridge Landmarks dataset and the Tuebingen Buildings dataset, and compare its performance with the existing learning-based pose regression methods PoseNet and RPNet on these two benchmarks.
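As background for the image-pair labels the abstract describes, the sketch below shows one plausible way to derive a relative translational and rotational label from two absolute 6DoF poses. This is not the paper's code: the camera-to-world convention, the (w, x, y, z) quaternion order, and all function names are assumptions made for illustration.

```python
# Hedged sketch: deriving a relative-pose label (dt, dq) for an image pair
# from two absolute 6DoF poses (translation + unit quaternion).
# Assumptions (not from the paper): camera-to-world poses, quaternions in
# (w, x, y, z) order, Hamilton product; all names are illustrative.
import numpy as np

def quat_conjugate(q: np.ndarray) -> np.ndarray:
    """Conjugate of a quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hamilton product a * b for quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def quat_rotate(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by unit quaternion q (computes q * (0, v) * conj(q))."""
    vq = np.concatenate([[0.0], v])
    return quat_multiply(quat_multiply(q, vq), quat_conjugate(q))[1:]

def relative_pose(t1, q1, t2, q2):
    """Pose of camera 2 expressed in camera 1's frame.

    Under the camera-to-world convention assumed here:
      dq = conj(q1) * q2   (relative rotation, R1^T R2)
      dt = R1^T (t2 - t1)  (relative translation in camera 1's frame)
    """
    dq = quat_multiply(quat_conjugate(q1), q2)
    dt = quat_rotate(quat_conjugate(q1), np.asarray(t2) - np.asarray(t1))
    return dt, dq / np.linalg.norm(dq)

if __name__ == "__main__":
    t1, q1 = [0.0, 0.0, 0.0], np.array([1.0, 0.0, 0.0, 0.0])  # identity pose
    t2, q2 = [1.0, 0.0, 2.0], np.array([0.0, 0.0, 1.0, 0.0])  # 180 deg about y
    dt, dq = relative_pose(t1, q1, t2, q2)
    print(dt, dq)  # -> [1. 0. 2.] and the 180-degree rotation quaternion
```

A regression network such as the one the abstract outlines would then be trained to predict (dt, dq) directly from the two RGB images of the pair.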
RCPNet: Deep-Learning based Relative Camera Pose Estimation for UAVs
01.09.2020
1,080,279 bytes
Article (Conference)
Electronic resource
English
Similar items:
- Camera-Based Pose Estimation for Fixed-Wing UAVs During Cooperative Landing Maneuvers | ArXiv | 2024
- British Library Conference Proceedings | 2022