The images were captured with a fisheye camera, and a magnetic compass was used to acquire the orientation data.

The datasets are split into two folders:
1) LEARN: To learn a new place, the robot camera captures 15 images over a 360-degree panorama. During this process, the robot stays still in order to avoid distortions in the representation of the place.
2) EXPLO: When exploring the environment (i.e. the rest of the time), the robot captures only 7 images per panorama, for faster place recognition. These images are captured while the robot is moving. Several exploration panoramas are recorded around the trajectory followed during the learning panoramas (see traj.pdf).

The average distance between two learning panoramas is 0.93 +/- 0.03 meters.
The average distance traveled during an exploration panorama is 0.71 +/- 0.01 meters.

DATASET A
---
- 20 meters long
- 22 learning panoramas (i.e. sets of 15 images captured while the robot is stopped)
- 5 exploration trajectories:
  - A_on_learned: 29 exploration panoramas (i.e. sets of 7 images captured while the robot is moving)
  - A_parallel: 29 exploration panoramas
  - A_diagonal1: 28 exploration panoramas
  - A_diagonal2: 30 exploration panoramas
  - A_diagonal3: 29 exploration panoramas

DATASET B
---
- 20 meters long
- 21 learning panoramas (i.e. sets of 15 images captured while the robot is stopped)
- 4 exploration trajectories:
  - B_on_learned: 29 exploration panoramas (i.e. sets of 7 images captured while the robot is moving)
  - B_parallel: 29 exploration panoramas
  - B_diagonal1: 29 exploration panoramas
  - B_diagonal2: 29 exploration panoramas

DATASET C
---
- 23.1 meters long
- 25 learning panoramas (i.e. sets of 15 images captured while the robot is stopped)
- 2 exploration trajectories:
  - C_on_learned: 34 exploration panoramas (i.e. sets of 7 images captured while the robot is moving)
  - C_parallel: 34 exploration panoramas

PANO_INFO FILE STRUCTURE
---
Every folder containing images also contains an info file, named either learn_pano_info.SAVE or explo_pano_info.SAVE. Each line corresponds to an image. The structure is the following (a minimal parsing sketch is given after the references):
- column 1: id = image_id + 1
- column 2: azimuth of the center of the image in degrees/360 (value in [0,1])
- column 3: elevation of the center of the image; irrelevant in this dataset (equal to 0)
- column 4: type of panorama: equal to 1 if learning and to 0 if exploration
- column 5: end of panorama: equal to 1 if it corresponds to the last image of a panorama

REFERENCES
---
The dataset was used in the paper:
Belkaid, M., Cuperlier, N., and Gaussier, P. Combining local and global visual information in context-based neurorobotic navigation. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 4947-4954, doi:10.1109/IJCNN.2016.7727851, 2016.
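
EXAMPLE: PARSING THE PANO_INFO FILES
---
As an illustration only, the following Python sketch reads a learn_pano_info.SAVE or explo_pano_info.SAVE file as described above. It assumes the five columns are whitespace-separated numeric values (the delimiter is not specified in this description); the function, class, and field names are hypothetical, not part of the dataset.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PanoImage:
        image_id: int        # zero-based id (column 1 stores image_id + 1)
        azimuth_deg: float   # azimuth of the image center, in degrees
        is_learning: bool    # True for a learning panorama, False for exploration
        last_in_pano: bool   # True if this is the last image of its panorama

    def read_pano_info(path: str) -> List[PanoImage]:
        """Parse a learn_pano_info.SAVE or explo_pano_info.SAVE file."""
        images = []
        with open(path) as f:
            for line in f:
                cols = line.split()
                if len(cols) < 5:
                    continue  # skip empty or malformed lines
                images.append(PanoImage(
                    image_id=int(float(cols[0])) - 1,    # column 1: id = image_id + 1
                    azimuth_deg=float(cols[1]) * 360.0,  # column 2: azimuth as a fraction of 360 degrees
                    # column 3 (elevation) is always 0 in this dataset and is ignored
                    is_learning=float(cols[3]) == 1.0,   # column 4: 1 = learning, 0 = exploration
                    last_in_pano=float(cols[4]) == 1.0,  # column 5: 1 = last image of the panorama
                ))
        return images

    # Hypothetical usage: count the panoramas in one exploration trajectory.
    # infos = read_pano_info("A_on_learned/explo_pano_info.SAVE")
    # n_panoramas = sum(1 for img in infos if img.last_in_pano)

The end-of-panorama flag in column 5 allows the 15-image learning panoramas and 7-image exploration panoramas to be regrouped without hard-coding the panorama length.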





    Title:
    A dataset for robotic outdoor visual navigation with multiple passages through trajectory segments

    Contributors:

    Publication date:
    2016-12-06

    Type of media:
    Research Data

    Type of material:
    Electronic Resource

    Language:
    English

    Classification:
    DDC: 629




    Similar items:

    Visual Routines for Outdoor Navigation

    Campani, M. / Straforini, M. / Cappello, M. et al. | British Library Conference Proceedings | 1993


    Visual routines for outdoor navigation

    Campani, M. / Cappello, M. / Piccioli, G. et al. | Tema Archive | 1993


    AURYON. Aerial Unmanned Robotic eYe with Outdoor Navigation

    Vidolov, Boris / Miras, Jerome de / Bonnet, Stephane | Tema Archive | 2008


    TAPAS: A Robotic Platform for Autonomous Navigation in Outdoor Environments

    Bondyra, Adam / Nowicki, Michał / Wietrzykowski, Jan | Springer Verlag | 2015


    Video-Trajectory Robot Dataset

    Mavsar, Matija | BASE | 2022
