Humans observe and infer in a disentangled way: rather than memorising a scene pixel by pixel, they learn it in terms of factors such as shape, scale, and colour. Robot task learning remains an open problem in robotics, and task planning in a robot workspace with many constraints makes it even more challenging. In this work, disentangled representations of robot tasks are learned with a Convolutional Variational Autoencoder, effectively capturing the underlying factors of variation in the data. A robot dataset for disentanglement evaluation is generated with a Selective Compliance Assembly Robot Arm. The disentanglement score of the proposed model increases to 0.206 with a robot path position accuracy of 0.055, whereas the state-of-the-art baseline (VAE) scores 0.015 with a corresponding path position accuracy of 0.053. The proposed algorithm is developed in Python and validated on a simulated robot model in Gazebo interfaced with the Robot Operating System.
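
The abstract describes learning disentangled robot-task representations with a convolutional β-VAE. As a minimal sketch only (assuming PyTorch, 64x64 single-channel workspace images, a 10-dimensional latent space, and β = 4, none of which are specified in the record), the code below illustrates the generic convolutional β-VAE objective: a reconstruction term plus a β-weighted KL divergence that encourages disentangled latent factors. With β = 1 the loss reduces to the standard VAE objective, which is the baseline the abstract compares against.

# Sketch of a convolutional beta-VAE objective; layer sizes, latent dimension,
# input resolution, and beta are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBetaVAE(nn.Module):
    def __init__(self, latent_dim=10):
        super().__init__()
        # Encoder: 1x64x64 -> 32x32x32 -> 64x16x16 -> flatten
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder mirrors the encoder
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
        return x_hat, mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Reconstruction term plus beta-weighted KL divergence to an isotropic
    # Gaussian prior; beta > 1 pressures the latents towards disentanglement.
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

if __name__ == "__main__":
    model = ConvBetaVAE()
    x = torch.rand(8, 1, 64, 64)  # stand-in batch of workspace images
    x_hat, mu, logvar = model(x)
    print(beta_vae_loss(x, x_hat, mu, logvar).item())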


    Title:

    Task level disentanglement learning in robotics using βVAE


    Contributors:
    M S, Midhun (author) / Kurian, James (author)

    Publication date:

    2022-01-01


    Remarks:

    International Journal of Electrical and Computer Engineering Systems; ISSN 1847-6996 (Print); ISSN 1847-7003 (Online); Volume 13; Issue 7


    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English



    Classification:

    DDC: 629



    Similar titles:

    Commutative Lie Group VAE for Disentanglement Learning

    Zhu, Xinqi / Xu, Chang / Tao, Dacheng | TIBKAT | 2022


    Task Understanding in Robotics

    Mayeda, H. | British Library Online Contents | 1993


    Feature Disentanglement of Robot Trajectories

    Valdenegro-Toro, Matias / Harnack, Daniel / Wöhrle, Hendrik | ArXiv | 2021


    Task Modelling in Collective Robotics

    Kube, C. R. / Zhang, H. | British Library Online Contents | 1997


    Multi-level robotics automation

    Agarwal Rupesh / Kumar Gaurav / Thakkar Jai et al. | European Patent Office | 2019
