Over the past decade, aerial robots have seen an unprecedented expansion in their utility as they take on tasks that had traditionally been reserved for humans. With an ever-widening range of aerial robotic applications, including mission-critical tasks such as disaster response, search and rescue, and infrastructure inspection, taking place in GPS-denied environments, reliable autonomous operation has become crucial. To accomplish their tasks, aerial robots operating in GPS-denied areas rely on a multitude of sensors for localization and navigation. Visible-spectrum cameras are the most commonly used sensing modality, as their low cost and weight render them suitable for small aerial robots in indoor or otherwise GPS-denied settings. However, in visually degraded environments, such as those with poor illumination, low texture, or obscurants including fog, smoke, and dust, the reliability of visible-light cameras deteriorates significantly. Maintaining reliable navigation under such conditions is nevertheless essential if the robot is to perform many of the critical applications listed above. In contrast to visible-light cameras, thermal cameras offer visibility in the infrared spectrum and can be used in a complementary manner with visible-spectrum cameras for robot localization and navigation, without the significant weight and power penalty typically associated with carrying other sensors such as 3D LiDARs or RADARs. Exploiting this fact, in this work we present a multi-sensor fusion algorithm for reliable odometry estimation in GPS-denied and degraded visual environments. The proposed method utilizes information from both the visible and thermal spectra for landmark selection and prioritizes feature extraction from informative image regions based on a metric over spatial entropy. Furthermore, inertial sensing cues are integrated to improve the robustness of the odometry estimation process. The proposed method runs in real time, fully on board an aerial robot. To verify the solution, a set of challenging experiments was conducted in a) an obscurant-filled, machine-shop-like industrial environment, as well as b) a dark subterranean mine in the presence of heavy airborne dust.
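The abstract does not detail how the entropy-based region selection is computed. As a rough, hypothetical sketch (not the authors' implementation), one plausible reading of "a metric over spatial entropy" is to rank image patches by the Shannon entropy of their intensity histograms and restrict feature extraction to the highest-scoring patches. The patch size, histogram bin count, and function names below are illustrative assumptions:

    import numpy as np

    def patch_entropy(patch, bins=32):
        # Shannon entropy of the patch's intensity histogram (8-bit input assumed).
        hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def select_informative_regions(gray, patch=32, top_k=16):
        # Score a non-overlapping grid of patches and return the origins of the
        # top_k highest-entropy ones; feature extraction would then be limited
        # to these regions.
        h, w = gray.shape
        scores = []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                scores.append((patch_entropy(gray[y:y+patch, x:x+patch]), x, y))
        scores.sort(key=lambda s: s[0], reverse=True)
        return [(x, y) for _, x, y in scores[:top_k]]

In a visual-thermal pipeline of the kind described, such a ranking could presumably be applied independently to visible and (rescaled) thermal frames before running a feature detector on the selected regions.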


    Title: Visual-Thermal Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments

    Contributors:

    Publication date: 2019-03-01

    Size: 3,128,996 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar items:

    NAVIGATION USING SELECTED VISUAL LANDMARKS
    KERZNER DANIEL TODD / SEYFI AHMAD / MEYER TIMON et al. | European Patent Office | 2021

    Navigation using selected visual landmarks
    KERZNER DANIEL TODD / SEYFI AHMAD / MEYER TIMON et al. | European Patent Office | 2023

    NAVIGATION USING SELECTED VISUAL LANDMARKS
    KERZNER DANIEL TODD / REZVANI BABAK / TOURNIER GLENN et al. | European Patent Office | 2023

    Autonomous aerial navigation using monocular visual-inertial fusion
    Lin, Yi / Gao, Fei / Qin, Tong et al. | British Library Online Contents | 2018

    Utilizing semantic visual landmarks for precise vehicle navigation
    Murali, Varun / Chiu, Han-Pang / Samarasekera, Supun et al. | IEEE | 2017