Localisation is a challenging task in space robotics due to the numerous factors that must be considered when designing such a system. It is especially demanding in a planetary environment because of the lack of GPS, the high likelihood of wheel slip, the high level of uncertainty in our knowledge of the environment, the processing-power constraints when travelling long distances, and, in most cases, a terrain with very few distinguishable features, consisting mostly of boulders, mountains, and fine-grained rock. Common approaches include visual odometry, panorama image matching, orbital-to-perspective top-down image matching, and orbital-to-ground image landmark matching. Image-based solutions are widely used; however, determining the rover's location from image features remains an arduous task, especially compared to Earth applications, given the absence of easily recognisable features in the environment such as advertisement boards, road signs, buildings, and parked cars. This paper proposes an approach to localise a rover operating on a planetary surface against a bird's-eye-view map (orbital image data) using a monocular camera atop the rover, an Inertial Measurement Unit (IMU), and either a known or unknown initial rover location. It aims to provide a near real-time absolute (global) localisation solution that maximises accuracy without compromising processing speed. We solve the problem in a novel manner by incorporating a lightweight convolutional neural network into the Monte Carlo Localization (MCL) algorithm to boost accuracy and robustness to outliers while retaining processing speed. The MCL algorithm uses a particle initialisation that exploits a priori knowledge of the rover's initial position in the environment. It provides a pose estimate by calculating the belief of the system using monocular visual-inertial odometry and performing the importance-weighting step via a Siamese Neural Network. In addition, further modifications reduce the computational complexity by adaptively decreasing the number of particles as the uncertainty diminishes. Several deep learning applications in the literature utilise heavy architectures and concentrate on localising the rover in a single run, requiring substantial computational power to do so. Our method instead provides a continuous localisation estimate that gradually improves throughout the mission, harnessing deep learning for its powerful feature extraction while utilising a lightweight architecture for fast execution. Continuous and accurate localisation in such an environment is fundamental to space exploration, as it primarily allows for safe and optimised navigation, which can lead to increased mission return. The developed system demonstrates high robustness to small changes in the scenery while moving within the environment and achieves a consistent drop in positional error as it travels further.
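As a concrete illustration of the update loop the abstract describes, the following Python sketch shows one MCL step in which particles are propagated with a visual-inertial odometry estimate, weighted by a Siamese similarity score against orbital-map patches, and adaptively reduced in number as uncertainty shrinks. This is a sketch under stated assumptions, not the authors' implementation; the helpers vio_delta, extract_patch, and siamese_similarity are hypothetical placeholders.

# Minimal MCL sketch of the pipeline described in the abstract, assuming numpy.
# vio_delta, extract_patch, and siamese_similarity are hypothetical stubs.
import numpy as np

rng = np.random.default_rng(0)

def vio_delta():
    # Placeholder: relative motion (dx, dy, dtheta) from monocular
    # visual-inertial odometry between two update steps.
    return np.array([0.5, 0.0, 0.01])

def extract_patch(orbital_map, pose):
    # Placeholder: crop the orbital-map patch a rover at `pose` would observe.
    return None

def siamese_similarity(ground_image, map_patch):
    # Placeholder: a trained lightweight Siamese CNN would output a
    # similarity score in (0, 1]; a random stub keeps the sketch runnable.
    return float(rng.uniform(0.01, 1.0))

def mcl_step(particles, weights, ground_image, orbital_map,
             motion_noise=(0.1, 0.1, 0.02), min_particles=100):
    # 1. Prediction: propagate particles with the VIO motion estimate plus noise.
    particles = particles + vio_delta() + rng.normal(0.0, motion_noise,
                                                     particles.shape)

    # 2. Importance weighting via the Siamese network.
    scores = np.array([siamese_similarity(ground_image,
                                          extract_patch(orbital_map, p))
                       for p in particles])
    weights = weights * scores
    weights /= weights.sum()

    # 3. Adaptively shrink the particle set as uncertainty (proxied here by
    #    the entropy of the weights) diminishes.
    entropy = -np.sum(weights * np.log(weights + 1e-12))
    n_next = max(min_particles,
                 int(len(particles) * entropy / np.log(len(particles))))

    # 4. Resample proportionally to weight and reset to uniform weights.
    idx = rng.choice(len(particles), size=n_next, p=weights)
    particles, weights = particles[idx], np.full(n_next, 1.0 / n_next)

    # Pose estimate: mean of the (uniformly weighted) resampled particles.
    return particles, weights, particles.mean(axis=0)

# Known initial position: particles concentrated around the start pose;
# an unknown start would instead spread them uniformly over the map.
particles = rng.normal([0.0, 0.0, 0.0], [1.0, 1.0, 0.1], size=(2000, 3))
weights = np.full(2000, 1.0 / 2000)
particles, weights, pose = mcl_step(particles, weights, None, None)

The entropy-based shrinkage above stands in for whatever adaptive scheme the paper actually uses (e.g. KLD-style sampling); the key point is that a concentrated belief needs fewer hypotheses, which is how the method keeps execution fast.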
Planetary Rover Localisation via Surface and Orbital Image Matching
2022-03-05
20635392 bytes
Conference paper
Electronic Resource
English
Planetary Rover Localization Within Orbital Maps
NTRS | 2014
ROAMS: planetary surface rover simulation environment
NTRS | 2003