Camera relocalization is promising for robot navigation; however, the lack of sufficient reference images and challenging condition shifts limit its wider application. This paper proposes a learning-based three-step pipeline that exploits spatial information features for camera relocalization. We first introduce a frustum intersection over union (IoU) to represent the spatial similarity of an image pair. This representation is used to train an image retrieval model that finds nearest-neighbor candidates for a query image. A spatial sample consensus (SPASAC) operation then filters outliers from the candidates. Finally, a relative camera pose regressor uses the valid candidates to predict each query image's absolute pose. In addition, we introduce two implementations of image-to-image translation networks for camera relocalization: one to augment the set of synthetic reference images and one to improve challenging night-to-day localization performance. Experiments show that our method can estimate camera poses across different domains and outperforms related methods on four benchmarks. In particular, experiments on the Tuebingen Buildings dataset demonstrate the robustness of our approach when localizing UAV-captured images with high-speed movement and large viewpoint variation.
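The frustum IoU in the first step can be illustrated with a simple Monte Carlo estimate: sample points in a box bounding both camera frusta and count containment in each. The sketch below is an assumed formulation, not the paper's exact one; the pinhole intrinsics `K`, world-to-camera poses `(R, t)`, image size `(w, h)`, and near/far planes are all illustrative parameters.

```python
import numpy as np

def frustum_corners(R, t, K, w, h, near, far):
    """Return the 8 world-space corners of a camera frustum.

    Pose convention (assumed): x_cam = R @ x_world + t.
    """
    K_inv = np.linalg.inv(K)
    corners = []
    for depth in (near, far):
        for u, v in ((0, 0), (w, 0), (w, h), (0, h)):
            # back-project pixel (u, v) at the given depth
            x_cam = depth * (K_inv @ np.array([u, v, 1.0]))
            corners.append(R.T @ (x_cam - t))  # camera -> world
    return np.array(corners)

def in_frustum(pts, R, t, K, w, h, near, far):
    """Boolean mask of world points lying inside the frustum."""
    x_cam = pts @ R.T + t                # world -> camera
    z = x_cam[:, 2]
    uv = x_cam @ K.T                     # project with intrinsics
    with np.errstate(divide="ignore", invalid="ignore"):
        u, v = uv[:, 0] / z, uv[:, 1] / z
    return (z > near) & (z < far) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

def frustum_iou(cam_a, cam_b, n_samples=200_000, seed=0):
    """Monte Carlo IoU of two frusta; cam_* = (R, t, K, w, h, near, far)."""
    rng = np.random.default_rng(seed)
    # sample uniformly in a box covering both frusta; the box volume
    # cancels in the IoU ratio
    corners = np.vstack([frustum_corners(*cam_a), frustum_corners(*cam_b)])
    lo, hi = corners.min(axis=0), corners.max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    in_a, in_b = in_frustum(pts, *cam_a), in_frustum(pts, *cam_b)
    union = np.count_nonzero(in_a | in_b)
    return np.count_nonzero(in_a & in_b) / union if union else 0.0
```

A symmetric overlap score of this kind could then serve as the spatial-similarity training target for the retrieval model described in the abstract.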
Learning-based Camera Relocalization with Domain Adaptation via Image-to-Image Translation
2021-06-15
5172187 bytes
Conference paper
Electronic Resource
English
Federated learning based multi-domain image-to-image translation
British Library Conference Proceedings | 2022
Lane markings-based relocalization on highway
IEEE | 2019
Camera peripheral image output system with camera image adjustment
European Patent Office | 2019
Domain adaptation for image classification using class prior probability
European Patent Office | 2016