This project studies a multimodal emotion recognition method intended to provide a theoretical basis for emotional feedback in mental health education, helping teachers and counselors monitor students' emotional state in real time and offer effective guidance. By combining multimodal information such as facial expressions, voice, and text, emotions can be identified accurately and reliable mental health feedback provided. The project applies multi-source information fusion technology to extract and model emotion features, improving the accuracy and robustness of emotion recognition; fusing feature information at multiple levels further improves the recognition of complex emotions. Simulation experiments show that the algorithm achieves an emotion recognition accuracy of 89.7%, approximately 12.5% higher than traditional single-modality emotion recognition algorithms. The method is also balanced across emotion classes, particularly in recognizing negative emotions, and can serve as a means of emotion monitoring and intervention.
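
The abstract describes multi-level fusion of facial, voice, and text features but does not specify the network architecture. The sketch below is a minimal illustration of that idea, assuming a simple PyTorch model in which each modality's features are projected, concatenated, and fused before classification; the modality feature sizes, hidden dimension, and six emotion classes are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of multi-level multimodal fusion for emotion recognition.
    # All dimensions and the number of emotion classes are assumptions for
    # illustration; the paper's actual architecture is not reproduced here.
    import torch
    import torch.nn as nn

    class MultimodalEmotionClassifier(nn.Module):
        def __init__(self, face_dim=512, audio_dim=128, text_dim=768,
                     hidden_dim=256, num_emotions=6):
            super().__init__()
            # Per-modality projections (low-level feature stage)
            self.face_proj = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
            self.audio_proj = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
            self.text_proj = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
            # Fusion of the concatenated modality features (high-level stage)
            self.fusion = nn.Sequential(
                nn.Linear(3 * hidden_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.3))
            self.classifier = nn.Linear(hidden_dim, num_emotions)

        def forward(self, face_feat, audio_feat, text_feat):
            fused = torch.cat([self.face_proj(face_feat),
                               self.audio_proj(audio_feat),
                               self.text_proj(text_feat)], dim=-1)
            return self.classifier(self.fusion(fused))

    # Usage with random placeholder features for a batch of 4 samples.
    model = MultimodalEmotionClassifier()
    logits = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 768))
    print(logits.shape)  # torch.Size([4, 6])
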


Title: Analysis of Emotion Recognition Model Based on Multimodal Deep Neural Network Algorithm

Contributors: Ying, Qian (author)

Publication date: 2024-10-23

Size: 575592 bytes

Type of media: Conference paper

Type of material: Electronic Resource

Language: English



Deep spatio-temporal feature fusion with compact bilinear pooling for multimodal emotion recognition
Nguyen, Dung / Nguyen, Kien / Sridharan, Sridha et al. | British Library Online Contents | 2018

Artificial Intelligence based Facial Emotion Recognition with Deep Neural GAN Augmentation
Menaka, D. / Priya, D. Karthika / Lenus, C. Reeda et al. | IEEE | 2024

Multimodal Emotion Recognition and Intention Understanding in Human-Robot Interaction
Chen, Luefeng / Liu, Zhentao / Wu, Min et al. | Springer Verlag | 2021