This dataset accompanies the following publication:

Günther, M.; Ruiz-Sarmiento, J. R.; Galindo, C.; González-Jiménez, J. & Hertzberg, J. Context-Aware 3D Object Anchoring for Mobile Robots. Robot. Auton. Syst., 2018 (accepted).

The dataset consists of 15 scenes inspected by a robot equipped with an RGB-D camera, driving around a table and turning towards it from different locations. The table contained a number of objects in varying table settings. In total, the dataset contains 1387 seconds of observation and 144 unique objects from 10 categories:

- SugarPot
- MilkPot
- CoffeeJug
- MobilePhone
- Mug
- Dish
- Fork
- Knife
- Spoon
- TableSign

Segmentation, tracking and local object recognition were run on the recorded sensor data, and their output (tracked objects and local recognition results) was added to the dataset. Since the objects were observed from multiple perspectives and tracking was lost while the robot was moving from one observation pose to another, the dataset contains more than one track ID for most objects (one for each subsequent observation of the object). Each track ID was manually labeled with the ground truth category of the object it represented. Additionally, all track IDs belonging to the same object were manually grouped together to allow evaluation of the anchoring process. Track IDs that did not correspond to any object on the table (but instead to objects on different tables, pieces of the table itself or other artifacts) were manually removed. In total, out of 432 track IDs, 410 (94.9 %) were associated with true objects, while 22 (5.1 %) were removed as artifacts.

File contents

All data is provided as rosbags; two short Python sketches for inspecting them are given at the end of this description. The naming scheme is as follows:

- `*-sensordata.bag.bz2`: The raw sensor data from the robot and all transform data, including localization in a map.
- `*-perception.bag.bz2`: The object recognition results and ground truth information for the tracked objects.
- `scene??-pr2-*.bag.bz2`: 5 scenes that were recorded using the PR2 robot.
- `scene??-calvin-*.bag.bz2`: 10 scenes that were recorded using the Calvin robot.

Both robots used an ASUS Xtion Pro Live as 3D camera.

- `race_vision_msgs.tar.bz2`: The custom messages used in the `*-perception` rosbags, as a ROS Kinetic package.

Videos

To get a first impression of the dataset, `scene10.mp4` and `scene19.mp4` show the corresponding scenes from the point of view of the robot's RGB camera.
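The bags are bzip2-compressed, so they have to be decompressed (e.g. with `bunzip2`) before use. After that, the `rosbag` Python API that ships with ROS gives a quick overview of what a bag contains. A minimal sketch follows; the file name is a hypothetical example, so substitute any bag from the dataset.

```python
#!/usr/bin/env python
# Minimal sketch (ROS Kinetic, Python 2): list the topics in one of the bags.
# Decompress the bag first, e.g.:  bunzip2 scene10-pr2-sensordata.bag.bz2
# The file name below is a hypothetical example; use any bag from the dataset.
import rosbag

with rosbag.Bag('scene10-pr2-sensordata.bag') as bag:
    print('duration: %.1f s' % (bag.get_end_time() - bag.get_start_time()))
    info = bag.get_type_and_topic_info()
    for topic, details in sorted(info.topics.items()):
        # details holds the message type and count for each recorded topic
        print('%-45s %-35s %6d msgs' % (topic, details.msg_type,
                                        details.message_count))
```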
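For the anchoring evaluation, all track IDs belonging to the same physical object must be grouped together. The sketch below shows how such a grouping could be collected from a `*-perception` bag; note that the topic name and message fields used here are placeholders, since the actual names are defined by the `race_vision_msgs` package included in the dataset.

```python
#!/usr/bin/env python
# Hedged sketch: group track IDs by the physical object they belong to,
# as needed to evaluate the anchoring process.
# CAUTION: '/ground_truth' and the fields track_id / object_id / category
# are hypothetical placeholders; the real topic names and message layout
# are defined by the race_vision_msgs package shipped with the dataset.
from collections import defaultdict
import rosbag

tracks_per_object = defaultdict(set)  # object id -> set of track IDs
track_category = {}                   # track ID  -> ground truth category

with rosbag.Bag('scene10-pr2-perception.bag') as bag:
    for topic, msg, stamp in bag.read_messages(topics=['/ground_truth']):
        tracks_per_object[msg.object_id].add(msg.track_id)
        track_category[msg.track_id] = msg.category

for obj_id, track_ids in sorted(tracks_per_object.items()):
    print('object %s: %d track ID(s)' % (obj_id, len(track_ids)))
```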





Title: Context-Aware 3D Object Anchoring for Mobile Robots Dataset
Publication date: 2018-06-01
Media type: Research data
Format: Electronic resource
Language: English
Classification: DDC 629




