Learning Accurate Objectness Instance Segmentation from Photorealistic Rendering for Robotic Manipulation
Springer Proceedings in Advanced Robotics
International Symposium on Experimental Robotics (ISER 2018), Buenos Aires, Argentina, November 5-8, 2018
Proceedings of the 2018 International Symposium on Experimental Robotics, Chapter 22, pp. 245-255

Recent progress in computer vision has been driven by high-capacity deep convolutional neural network (CNN) models trained on large, generic datasets. Creating large datasets with dense pixel-level labels, however, is extremely costly. In this paper, we focus on instance segmentation for robotic manipulation using rich image and depth features. To avoid intensive human labeling, we develop an automated rendering pipeline that rapidly generates labeled datasets: given 3D object models as input, it produces photorealistic images with pixel-accurate semantic label maps and depth maps. The synthetic dataset is then used to train an RGB-D segmentation model that extends the Mask R-CNN framework to fuse depth input. Our results open up new possibilities for advancing robotic perception with cheap, large-scale synthetic data.
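The rendering pipeline the abstract describes (3D models in, photorealistic RGB plus pixel-accurate label and depth maps out) can be approximated with an off-the-shelf tool such as BlenderProc2, which appears among the related items below. The following is a minimal sketch, not the authors' pipeline: the model path and scene setup are invented for illustration, and the BlenderProc2 calls follow its documentation but may differ across versions.

```python
# Minimal synthetic-data loop in the spirit of the abstract, using BlenderProc2
# as a stand-in renderer (run with: blenderproc run this_script.py).
# NOT the authors' pipeline; paths and scene setup are hypothetical.
import blenderproc as bproc
import numpy as np

bproc.init()

# Load a 3D object model and tag it so the segmentation pass can label it.
objs = bproc.loader.load_obj("models/mug.obj")  # hypothetical model path
for i, obj in enumerate(objs):
    obj.set_cp("category_id", i + 1)

# A single point light so the renders are not black.
light = bproc.types.Light()
light.set_location([2.0, -2.0, 2.0])
light.set_energy(300)

# Sample a few camera poses looking back toward the origin.
for _ in range(5):
    position = np.random.uniform([-1.0, -1.0, 0.5], [1.0, 1.0, 1.5])
    rotation = bproc.camera.rotation_from_forward_vec(-position)
    bproc.camera.add_camera_pose(
        bproc.math.build_transformation_mat(position, rotation))

# Request depth and pixel-accurate segmentation maps alongside RGB.
bproc.renderer.enable_depth_output(activate_antialiasing=False)
bproc.renderer.enable_segmentation_output(
    map_by=["category_id", "instance"], default_values={"category_id": 0})

data = bproc.renderer.render()  # dict with color, depth, and segmentation maps
bproc.writer.write_hdf5("output/", data)
```

Every rendered frame then comes with a perfectly aligned label map and depth map for free, which is the property the paper exploits to sidestep human annotation.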
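For the RGB-D model, the record only says Mask R-CNN is extended to fuse depth input. One common way to realize this, shown below as a hedged sketch rather than the paper's actual architecture, is early fusion: stack depth as a fourth input channel and widen the first convolution of a torchvision Mask R-CNN accordingly. The depth normalization statistics here are placeholders.

```python
# Early RGB-D fusion for torchvision's Mask R-CNN: a sketch, not the paper's model.
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # torchvision >= 0.13 weights API

# Swap the 3-channel stem conv for a 4-channel one (RGB + depth).
old = model.backbone.body.conv1
new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                stride=old.stride, padding=old.padding, bias=False)
with torch.no_grad():
    new.weight[:, :3] = old.weight                        # keep pretrained RGB filters
    new.weight[:, 3:] = old.weight.mean(1, keepdim=True)  # init depth from RGB mean
model.backbone.body.conv1 = new

# The built-in transform normalizes per channel; extend its statistics to 4 channels.
# The depth mean/std below are placeholders; compute them from your own data.
model.transform.image_mean = list(model.transform.image_mean) + [0.5]
model.transform.image_std = list(model.transform.image_std) + [0.25]

# Smoke test on a dummy RGB-D image (values in [0, 1], depth pre-normalized).
model.eval()
rgbd = torch.rand(4, 480, 640)
with torch.no_grad():
    out = model([rgbd])
print(out[0]["masks"].shape)  # (num_detections, 1, 480, 640)
```

Early fusion is the cheapest variant; the paper's fusion may instead use a separate depth branch (late fusion), which trades extra parameters for modality-specific features.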
2020-01-23
11 pages
Article/Chapter (Book)
Electronic Resource
English
Visual object tracking based on objectness measure with multiple instance learning
British Library Online Contents | 2017
BlenderProc2: A Procedural Pipeline for Photorealistic Rendering
German Aerospace Center (DLR) | 2023
Learning Stixel-based Instance Segmentation
IEEE | 2021