The paper describes a method for planning collision-free motions of an industrial manipulator that shares its workspace with human operators in a human–robot collaborative application with strict safety requirements. The proposed workflow exploits mixed reality to insert real entities into a virtual scene, in which the robot control command is computed and validated by simulating robot motions without risk to the human. The proposed motion planner relies on a sensor-fusion algorithm that improves the 3D perception of the humans inside the robot workspace. This algorithm merges the pose estimates of the human bones, reconstructed by a pointcloud-based skeleton-tracking algorithm, with orientation data acquired from wearable inertial measurement units (IMUs) assumed to be attached to the human bones. The algorithm provides a final reconstruction of the position and orientation of the human bones, which can be used to include the human in the virtual simulation of the robotic workcell. A dynamic motion-planning algorithm can then be run within this mixed-reality environment, yielding a collision-free joint velocity command for the real robot.
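
The abstract gives no implementation details; the following Python fragment is only a minimal sketch of the kind of per-bone fusion it describes, assuming the skeleton tracker supplies a bone position together with a noisy orientation and the wearable IMU supplies a more reliable orientation. The function names, the quaternion blending via spherical interpolation, and the imu_weight parameter are illustrative assumptions, not the authors' algorithm.

    # Illustrative sketch (not the paper's implementation): fuse, for one bone,
    # the estimate of a pointcloud skeleton tracker (good position, noisier
    # orientation) with a body-worn IMU (more reliable orientation).
    import numpy as np

    def quat_slerp(q0, q1, alpha):
        """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
        q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
        dot = np.dot(q0, q1)
        if dot < 0.0:                      # take the shorter arc
            q1, dot = -q1, -dot
        if dot > 0.9995:                   # nearly parallel: fall back to lerp
            q = q0 + alpha * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1.0 - alpha) * theta) * q0
                + np.sin(alpha * theta) * q1) / np.sin(theta)

    def fuse_bone(tracker_pos, tracker_quat, imu_quat, imu_weight=0.8):
        """Return a fused (position, orientation) estimate for one bone.

        Position comes from the skeleton tracker only; orientation blends the
        tracker and IMU quaternions, trusting the IMU more (imu_weight is a
        hypothetical tuning parameter, not a value from the paper)."""
        fused_quat = quat_slerp(tracker_quat, imu_quat, imu_weight)
        return tracker_pos, fused_quat

    # Example: fuse the left-forearm bone before updating the virtual human model.
    pos = np.array([0.42, -0.10, 1.05])            # metres, from the tracker
    q_tracker = np.array([0.92, 0.0, 0.39, 0.0])   # (w, x, y, z), noisy
    q_imu = np.array([0.96, 0.0, 0.28, 0.0])       # (w, x, y, z), from the IMU
    fused_pos, fused_q = fuse_bone(pos, q_tracker, q_imu)
    print(fused_pos, fused_q)

Under these assumptions, the fused bone poses would update the human model inside the virtual workcell, where the dynamic motion planner simulates and checks the robot motion before the joint velocity command is sent to the real robot, as the abstract describes.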



    Title:

    Planning Collision-Free Robot Motions in a Human–Robot Shared Workspace via Mixed Reality and Sensor-Fusion Skeleton Tracking


    Publication date:

    2022-01-01


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English


    Classification:

    DDC: 629






    Low-level sensor fusion-based human tracking for mobile robot

    Ristić-Durrant, Danijela / Gao, Ge / Leu, Adrian | BASE | 2016

    Open access