With the advent of collaborative manipulators, the community is pushing the limits of human-robot interaction with novel control, planning, and task allocation strategies. For a purposeful interaction, however, the robot is also required to understand and predict the actions of the human, not only at a kinematic level (i.e. motion estimation) but also at a higher level of abstraction (i.e. action recognition), ideally from the human's own perspective. Dealing with egocentric videos comes with the benefit that the data source already embeds an intrinsic attention mechanism, driven by the focus of the user. However, deploying such technology in realistic use-cases cannot ignore the large variability of background characteristics across environments, which results in a domain shift in feature space that cannot be learned from labels at training time. In this paper, we discuss a method to perform Domain Adaptation with no external supervision, which we test on the EPIC-Kitchens-100 UDA Challenge in Action Recognition. More specifically, we start from our previous work on Relative Norm Alignment and extend the approach to unlabelled target data, enabling a simpler adaptation of the model to the target distribution in an unsupervised fashion. To this end, we enhance our framework with multi-level adversarial alignment and with a set of losses aimed at reducing the classifier's uncertainty on the target data. Extensive experiments demonstrate how our approach is capable of performing Multi-Source Multi-Target Domain Adaptation, thus minimising both temporal (i.e. different recording times) and environmental (i.e. different kitchens) biases.
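
As a concrete illustration of the "losses aimed at reducing the classifier's uncertainty on the target data", the following is a minimal sketch of an entropy-minimisation term computed on unlabelled target predictions, a common realisation of such an uncertainty-reduction loss. The function names and the weighting parameter lambda_ent are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn.functional as F

    def target_entropy_loss(target_logits: torch.Tensor) -> torch.Tensor:
        # Mean Shannon entropy of the softmax predictions on unlabelled target clips.
        # Minimising it pushes the classifier toward confident (low-uncertainty)
        # decisions on the target distribution, without using any target labels.
        probs = F.softmax(target_logits, dim=-1)          # (batch, num_classes)
        log_probs = F.log_softmax(target_logits, dim=-1)
        return -(probs * log_probs).sum(dim=-1).mean()

    def total_loss(source_logits, source_labels, target_logits, lambda_ent=0.1):
        # Supervised cross-entropy on labelled source clips plus the (hypothetical)
        # entropy term on unlabelled target clips.
        ce = F.cross_entropy(source_logits, source_labels)
        ent = target_entropy_loss(target_logits)
        return ce + lambda_ent * ent

In practice such a term is added to the standard classification loss on the labelled source data and balanced by a small weight, so that the classifier stays anchored to the source labels while becoming more confident on the target domain.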





    Title:

    Toward Human-Robot Cooperation: Unsupervised Domain Adaptation for Egocentric Action Recognition


    Additional title:

    Springer Proceedings in Advanced Robotics



    Conference:

    International Workshop on Human-Friendly Robotics 2022, Delft, The Netherlands, September 22-23, 2022



    Publication date:

    2023-01-02


    Size:

    15 pages





    Type of media:

    Article/Chapter (Book)


    Type of material:

    Electronic Resource


    Language:

    English




    Toward Multimodal Human-Robot Cooperation and Collaboration

    Perzanowski, Dennis / Brock, Derek / Bugajska, Magdalena et al. | AIAA | 2004


    Unsupervised Hyperbolic Action Recognition

    Castro-Vargas, John-Alejandro / Garcia-Garcia, Alberto / Martinez-Gonzalez, Pablo et al. | Springer Verlag | 2022


    Unsupervised Evaluation of Lidar Domain Adaptation

    Hubschneider, Christian / Roesler, Simon / Zollner, J. Marius | IEEE | 2020


    Learning Kernels for Unsupervised Domain Adaptation with Applications to Visual Object Recognition

    Gong, B. / Grauman, K. / Sha, F. | British Library Online Contents | 2014


    Gesteme-free context-aware adaptation of robot behavior in human–robot cooperation

    Nessi, Federico / Beretta, Elisa / Ferrigno, Giancarlo et al. | BASE | 2016
