This paper emphasizes the importance of a robot's ability to refer to its task history, especially when it executes a series of pick-and-place manipulations by following language instructions given one by one. The advantages of referring to the manipulation history are twofold: (1) language instructions that omit details but refer to past actions can be interpreted, and (2) the visual information of objects occluded by previous manipulations can be inferred. To this end, we introduce a history-dependent manipulation task whose objective is to visually ground a series of language instructions for proper pick-and-place manipulations by referring to the past. We also present a corresponding dataset and a baseline model, and show that our model, trained on the proposed dataset, can be applied to the real world via CycleGAN-based domain translation. Our dataset and code are publicly available on the project website: https://sites.google.com/view/history-dependent-manipulation.
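
As context for the CycleGAN-based real-world transfer mentioned in the abstract, the following is a minimal sketch in PyTorch of how a real-to-sim generator could translate a real camera frame into the simulated visual domain before a simulation-trained grounding model consumes it. The architecture follows the standard CycleGAN generator layout (encoder, residual bottleneck, decoder); all class names, shapes, and the checkpoint path are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Standard CycleGAN-style residual block: two 3x3 convs with a skip connection.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class Real2SimGenerator(nn.Module):
    # Encoder / residual bottleneck / decoder, following the usual CycleGAN
    # generator layout. "Real2Sim" is an assumed name for the translation
    # direction: real camera frames -> simulation-style frames.
    def __init__(self, ch=64, n_blocks=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.ReflectionPad2d(3), nn.Conv2d(3, ch, 7),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
            nn.InstanceNorm2d(ch * 2), nn.ReLU(inplace=True),
            nn.Conv2d(ch * 2, ch * 4, 3, stride=2, padding=1),
            nn.InstanceNorm2d(ch * 4), nn.ReLU(inplace=True),
            *[ResidualBlock(ch * 4) for _ in range(n_blocks)],
            nn.ConvTranspose2d(ch * 4, ch * 2, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(ch * 2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(3), nn.Conv2d(ch, 3, 7), nn.Tanh(),
        )

    def forward(self, real_frame):
        return self.net(real_frame)

generator = Real2SimGenerator()
# In practice, trained weights would be loaded here, e.g.
# generator.load_state_dict(torch.load("real2sim_generator.pt"))  # hypothetical checkpoint
generator.eval()
with torch.no_grad():
    real_frame = torch.rand(1, 3, 256, 256) * 2 - 1   # stand-in for a normalized camera image
    sim_like_frame = generator(real_frame)            # fed to the simulation-trained grounding model

Translating observations into the domain the grounding model was trained on leaves the model itself untouched; the opposite direction, rendering simulated scenes as real-looking images, would use the paired generator that CycleGAN training also yields.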


    Title:

    Visually Grounding Language Instruction for History-Dependent Manipulation


    Contributors:
    Ahn, Hyemin (author) / Kwon, Obin (author) / Kim, Kyungdo (author) / Jeong, Jaeyeon (author) / Jun, Howoong (author) / Lee, Hongjung (author) / Lee, Dongheui (author) / Oh, Songhwai (author)

    Conference:

    2022; Philadelphia (PA), USA


    Publication date:

    2022-05-01


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Similar items:

    Autonomously learning to visually detect where manipulation will succeed

    Nguyen, H. | British Library Online Contents | 2014



    An Approach to Automated Instruction Generation with Grounding Using LLMs and RAG

    Holvoet, Laura / van Bekkum, Michael / de Vries, Aijse | Springer Verlag | 2025


    Multi-Level Query Interaction for Temporal Language Grounding

    Tang, Haoyu / Zhu, Jihua / Wang, Lin et al. | IEEE | 2022


    Boost Tracking by Natural Language With Prompt-Guided Grounding

    Li, Hengyou / Liu, Xinyan / Li, Guorong et al. | IEEE | 2025