© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Robots that can assist in the Activities of Daily Living (ADL), such as dressing, may support older adults, addressing the needs of an aging population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking, which may result in unsafe trajectory planning or prevent the planning task from proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this work, tracking of the arm is lost when the user's hand enters the jacket, which may lead to unsafe situations for the user and a poor interaction experience. Using occlusion-free motion-tracking data gathered from a human-human interaction (HHI) study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm from other features of the user pose. The best features for predicting the elbow position were explored using regression trees, which indicated the hips and shoulder as possible predictors. Engineered features were also created based on observations of real dressing scenarios, and their effectiveness was explored. A comparison between position-based and orientation-based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position-based data, the elbow position could be predicted with a 4.1 cm error, but adding engineered features reduced the error to 2.4 cm. Adding orientation information to the data did not improve the accuracy, and aggregating univariate response models failed to make significant improvements. The model was evaluated on Kinect data for a robot dressing task and, although not without issues, demonstrates potential for this application. Although this has been demonstrated for jacket dressing, the technique could be applied to a number of different situations involving occluded tracking.

Peer Reviewed

Postprint (author's final draft)
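
To make the approach in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the core idea: a recurrent model that takes a short window of the still-visible joint positions (for example, shoulders and hips) and regresses the occluded elbow position. The window length, joint set, layer sizes, and training loop are illustrative assumptions, and the data is a synthetic stand-in for the occlusion-free motion-tracking recordings described in the study.

    # Hypothetical sketch of an RNN-based elbow-position regressor.
    import numpy as np
    import tensorflow as tf

    WINDOW = 10        # assumed number of past frames fed to the recurrent model
    N_FEATURES = 12    # e.g. 4 visible joints x (x, y, z) coordinates
    N_OUTPUTS = 3      # predicted elbow position (x, y, z)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(WINDOW, N_FEATURES)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(N_OUTPUTS),  # regress the elbow coordinates
    ])
    model.compile(optimizer="adam", loss="mse")

    # Synthetic placeholder for the occlusion-free motion-tracking data gathered
    # in the human-human dressing study; real training would use those recordings.
    X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
    y = np.random.rand(256, N_OUTPUTS).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)

    # At dressing time, once the depth camera loses the arm inside the jacket,
    # the last WINDOW frames of the visible joints would be fed in to estimate
    # the occluded elbow.
    elbow_xyz = model.predict(X[:1], verbose=0)
    print(elbow_xyz.shape)  # (1, 3)

In practice the features would be the joint positions (and optionally orientations) identified by the regression-tree analysis, and the model would be evaluated with the repeated k-fold cross-validation scheme described above.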





    Title :

    "Elbows out": predictive tracking of partially occluded pose for robot-assisted dressing



    Publication date :

    2018-01-01



    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    English



    Classification :

    DDC:    629




    Similar titles :

    Partially occluded vehicle recognition and tracking in 3D

    Ohn-Bar, Eshed / Sivaraman, Sayanan / Trivedi, Mohan | IEEE | 2013


    PARTIALLY OCCLUDED VEHICLE RECOGNITION AND TRACKING IN 3D

    Ohn-Bar, E. / Sivaraman, S. / Trivedi, M. et al. | British Library Conference Proceedings | 2013


    Navigation based on partially occluded pedestrians

    BENOU ARIEL / ALONI DAVID / KIRZHNER DMITRY et al. | European Patent Office | 2023


    CaltechFN: Distorted and Partially Occluded Digits

    Rim, Patrick / Saha, Snigdha / Rim, Marcus | British Library Conference Proceedings | 2023


    NAVIGATION BASED ON PARTIALLY OCCLUDED PEDESTRIANS

    BENOU ARIEL / ALONI DAVID / KIRZHNER DMITRY et al. | European Patent Office | 2022
