We study the problem of detecting object grasps from an RGB-D image in cluttered scenes. In particular, we focus on grasping household objects with a two-finger robotic arm. Recent work in this area has made remarkable progress, thanks to the collection of large-scale object grasp datasets. However, due to the large shape variation of real-world objects, existing approaches handle novel objects poorly, i.e., objects never seen during training. In this paper, to alleviate this problem, we propose a novel Domain Adaptation Grasp Network (DAGNet) to detect grasp poses for novel objects. The core of our method is a network training scheme that efficiently transfers grasp knowledge from known objects to novel ones. To demonstrate its effectiveness, we evaluate the method in both real and simulated robotic-arm grasping scenarios. Experiments show that, compared with existing methods, the proposed DAGNet achieves better performance on grasping novel objects.
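The abstract only outlines the training scheme. As a purely illustrative sketch, and not the paper's actual DAGNet architecture, the snippet below shows one common way such knowledge transfer can be set up: domain-adversarial training with a gradient-reversal layer, where known (trained) objects act as the source domain and novel objects as the target domain. All module and function names (GradReverse, DomainAdaptiveGraspNet, train_step) are hypothetical.

```python
# Illustrative sketch only; DANN-style gradient reversal for grasp detection.
# Not the paper's DAGNet; all names here are hypothetical placeholders.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lamb, None

class DomainAdaptiveGraspNet(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared RGB-D feature extractor (placeholder: a small conv stack).
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Grasp head: predicts a planar grasp (x, y, angle, width, quality).
        self.grasp_head = nn.Linear(feat_dim, 5)
        # Domain head: classifies known (source) vs. novel (target) objects.
        self.domain_head = nn.Linear(feat_dim, 2)

    def forward(self, rgbd, lamb=1.0):
        feat = self.backbone(rgbd)
        grasp = self.grasp_head(feat)
        # Gradient reversal pushes the backbone toward domain-invariant features.
        domain = self.domain_head(GradReverse.apply(feat, lamb))
        return grasp, domain

# One training step: supervised grasp loss on known objects plus an
# adversarial domain loss on both known and novel objects.
def train_step(model, opt, src_rgbd, src_grasp, tgt_rgbd, lamb=0.1):
    opt.zero_grad()
    pred_grasp, src_dom = model(src_rgbd, lamb)
    _, tgt_dom = model(tgt_rgbd, lamb)
    grasp_loss = nn.functional.smooth_l1_loss(pred_grasp, src_grasp)
    dom_logits = torch.cat([src_dom, tgt_dom])
    dom_labels = torch.cat([torch.zeros(len(src_dom), dtype=torch.long),
                            torch.ones(len(tgt_dom), dtype=torch.long)])
    loss = grasp_loss + nn.functional.cross_entropy(dom_logits, dom_labels)
    loss.backward()
    opt.step()
    return loss.item()
```

In this style of training, the domain head is rewarded for telling known and novel objects apart, while the reversed gradients push the shared backbone toward features that work for both domains, which is one plausible reading of "transferring grasp knowledge from known objects to novel ones."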
Domain Adaptation Grasp Network for Novel Object Grasp Detection
Lecture Notes in Electrical Engineering
International Conference on Autonomous Unmanned Systems (ICAUS 2021), Changsha, China, September 24–26, 2021
Proceedings of the 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021), Chapter 294, pp. 3000–3009
2022-03-18
10 pages
Article/Chapter (Book)
Electronic Resource
English