Multimodal emotion recognition is a mainstream frontier direction in AI, with significant potential value in application scenarios involving intelligent science and human-computer interaction. The connection between contextual semantic features is a key factor in multimodal emotion recognition (ER): although CNN models can predict information contained in a time series, they do not fully model the relationships between semantic features and therefore have certain limitations. The CMN model exploits the semantic information that a GRU can store in its memory unit, but in practice it only dynamically monitors the emotional changes of two interlocutors over a period of time, which is insufficient for the dialogue settings involved. The recognition model constructed in this paper, based on a variational autoencoder and multimodal feature fusion, extracts rich contextual semantic information through a Bi-LSTM and applies an attention mechanism to fuse the multimodal features extracted from multiple speakers into salient features, effectively addressing the problems above. The proposed MFF-VAE recognition model combines horizontal feature extraction across multiple participants with the complementarity and connection of vertical contextual semantic information, providing a direction for further improving recognition accuracy in multimodal emotion recognition.
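Since the abstract names the building blocks (Bi-LSTM context encoding, attention-based fusion of multi-speaker multimodal features, a VAE latent space) but not the paper's layers or dimensions, the following is a minimal, hypothetical PyTorch sketch of such a pipeline. Every name, dimension, and the six-class output are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MFFVAESketch(nn.Module):
    """Hypothetical MFF-VAE-style model: Bi-LSTM context encoder,
    attention fusion, VAE latent, and an emotion classifier."""

    def __init__(self, feat_dim=128, hidden_dim=64, latent_dim=32, n_classes=6):
        super().__init__()
        # Bi-LSTM over the utterance sequence to capture contextual semantics.
        self.context = nn.LSTM(feat_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Self-attention to weight features across utterances/speakers.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=4,
                                          batch_first=True)
        # VAE head: encode the fused summary into a Gaussian latent.
        self.to_mu = nn.Linear(2 * hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(2 * hidden_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, 2 * hidden_dim)  # reconstructs summary
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        # x: (batch, seq_len, feat_dim) pre-extracted multimodal features,
        # e.g. concatenated audio/text/visual vectors per utterance.
        h, _ = self.context(x)            # (batch, seq_len, 2*hidden_dim)
        fused, _ = self.attn(h, h, h)     # attention-based fusion
        summary = fused.mean(dim=1)       # dialogue-level summary vector
        mu, logvar = self.to_mu(summary), self.to_logvar(summary)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.classifier(z), self.decoder(z), summary, mu, logvar

# Toy usage: batch of 2 dialogues, 10 utterances each, 128-dim features.
logits, recon, summary, mu, logvar = MFFVAESketch()(torch.randn(2, 10, 128))
```

A training loss for this sketch would combine cross-entropy on `logits`, reconstruction error between `recon` and `summary`, and the KL term computed from `mu` and `logvar`; the weighting between them is not specified in this record.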

    Title: Multimodal Feature Fusion and Emotion Recognition Based on Variational Autoencoder

    Contributors: Wang, Yuan (author) / Guan, Xinyu (author)

    Publication date: 2023-10-11

    Size: 2712673 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    Deep Feature Consistent Variational Autoencoder
    Hou, Xianxu / Shen, Linlin / Sun, Ke et al. | ArXiv | 2016

    Deep spatio-temporal feature fusion with compact bilinear pooling for multimodal emotion recognition
    Nguyen, Dung / Nguyen, Kien / Sridharan, Sridha et al. | British Library Online Contents | 2018

    Score-Based Multimodal Autoencoder
    Wesego, Daniel / Rooshenas, Pedram | ArXiv | 2023

    Variational Autoencoder
    Pinheiro Cinelli, Lucas / Araújo Marins, Matheus / Barros da Silva, Eduardo Antônio et al. | Springer Verlag | 2021

    Variational Autoencoder
    Okadome, Takeshi | Springer Verlag | 2025