Human-robot collaboration is becoming increasingly important, especially in the context of rehabilitation robotics, where people use robots to regain autonomy. For this purpose, a variety of approaches to control these systems have been developed. A highly intuitive approach is head-motion-based control, which enables precise mapping of 3D control commands onto a system via deliberate head movements. This thesis presents a system that provides the robustness and adaptivity necessary to control a robotic system by means of head motion. To this end, a lightweight, infrastructureless sensor system was developed that can be worn on the head and fully controls a robotic system in all degrees of freedom in Cartesian space. The system is modular in both design and data fusion scheme to grant as much adaptivity as possible.

The core of the sensor system is a Magnetic, Angular Rate, and Gravity (MARG) sensor, which is used to determine the orientation of an object in 3D space. The orientation computation is based on the numerical integration of angular rate measurements from a three-axis gyroscope. Unfortunately, Micro-Electro-Mechanical Systems (MEMS) gyroscopes are subject to noise terms that degrade the orientation estimate. To counteract this, MARG sensors are equipped with global reference measurement sensors: an accelerometer and a magnetometer. The accelerometer is used to correct the orientation in the plane perpendicular to gravity, while the magnetometer serves as an electronic compass to correct the remaining axis. This arrangement enables a globally referenced orientation computation. However, magnetometers are subject to interference, which can completely invalidate their use as a reference measurement. To increase robustness against such disturbances, a data fusion process has been developed that compensates short-term disturbances and allows additional references for error correction to be incorporated without further effort.

On this basis, a novel approach was developed that uses the physiological coupling of a human's eye and head rotation to support the MARG sensor's orientation determination during long-term magnetic field perturbations. Experimental data demonstrate that this method reduces the error by up to 50 percent. The use of an eye tracker naturally opens up visual methods for orientation determination as well. Therefore, within this thesis an open-source visual Simultaneous Localization And Mapping (SLAM) system for RGB-D cameras is integrated into the data fusion process to enable a robust computation of the head pose in space. The data fusion process is designed to switch dynamically between magnetic, inertial, eye-tracking-based, and visual reference technologies, enabling robust orientation estimation under various perturbations, e.g. gyroscope bias, magnetic disturbances, and visual sensor data failure. In addition to sensing head rotation, the combination of these sensors and methods enables precise eye- or head-gaze vector control for accurate positioning of a robot's End Effector (EEF) in Cartesian space. The work is concluded with a functional verification of the system in a human-robot workplace, which indicates that the sensor system and methods enable a precise control mechanism for robot teleoperation.
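To make the described fusion principle concrete, the following is a minimal complementary-filter sketch in Python using NumPy and a (w, x, y, z) quaternion convention. It is an illustrative assumption, not the fusion scheme developed in the thesis: the gyroscope is integrated numerically, and the accelerometer pulls the tilt estimate back toward the measured gravity direction; the magnetometer-based yaw correction, as well as the eye-tracking and SLAM references, would enter as further correction terms of the same form. All function names and the fixed gain are hypothetical.

    import numpy as np

    def quat_mult(p, q):
        """Hamilton product of quaternions given as (w, x, y, z)."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def marg_step(q, gyro, acc, dt, gain=0.05):
        """One fusion step: integrate the angular rate, then correct the
        tilt with the accelerometer (yaw correction omitted for brevity)."""
        # Numerical integration of the body-frame angular rate (rad/s).
        q = q + 0.5 * quat_mult(q, np.array([0.0, *gyro])) * dt
        q /= np.linalg.norm(q)
        # Gravity direction that the current estimate predicts in the body frame.
        w, x, y, z = q
        g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z),
                           w*w - x*x - y*y + z*z])
        # Small-angle correction toward the measured gravity direction.
        g_meas = acc / np.linalg.norm(acc)
        err = np.cross(g_meas, g_pred)   # rotation axis scaled by sin(angle)
        q = quat_mult(q, np.array([1.0, *(0.5 * gain * err)]))
        return q / np.linalg.norm(q)

    # Example: a stationary sensor tilted by 10 degrees; the estimate
    # converges toward the measured gravity despite zero angular rate.
    q = np.array([1.0, 0.0, 0.0, 0.0])
    acc = np.array([np.sin(np.radians(10.0)), 0.0, np.cos(np.radians(10.0))])
    for _ in range(500):
        q = marg_step(q, gyro=np.zeros(3), acc=acc, dt=0.01)

The correction step is deliberately generic: any reference sensor that predicts a direction or pose can contribute an analogous small-angle term, which is what allows the fusion process to switch between magnetic, inertial, eye-tracking-based, and visual references.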


    Title:

    Multimodal sensor data fusion methods for infrastructureless head-worn interfaces - Sensor systems for robust and adaptive human-robot collaboration



    Publication date:

    2022-01-31



    Media type:

    Thesis


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 620 / 629




    Multimodal sensor-based human-robot collaboration in assembly tasks

    Male, James / Al, Gorkem Anil / Shabani, Arya et al. | BASE | 2022

    Free access


    Infrastructureless Inter-Vehicular Real-Time Route Guidance

    Hawas, Yaser E. / Napenas, Marc Joseph B. | IEEE | 2008


    Multimodal Human-Robot Collaboration in Assembly

    Liu, Sichao | BASE | 2022

    Free access


    Virtual lifeline: Multimodal sensor data fusion for robust navigation in unknown environments

    Widyawan, Widyawan / Pirkl, Gerald / Munaretto, Daniele et al. | BASE | 2012

    Free access