Multimodal emotion recognition is one of the mainstream frontier directions in AI, with significant potential value in application scenarios involving intelligent science and human-computer interaction. The connection between contextual semantic features is a factor that must be emphasized in multimodal emotion recognition (ER): although CNN-based models can predict information contained in a time series, they do not fully capture the special relationships among semantic information and are therefore limited. The CMN exploits the semantic information that a GRU can store in its memory unit; in practice, however, that model only dynamically monitors the emotional changes of two parties over a period of time and falls short for the dialogue processes involved. The recognition model based on a variational autoencoder and multimodal feature fusion constructed in this paper extracts rich contextual semantic information through a Bi-LSTM and uses an attention mechanism to fuse the multimodal features extracted from multiple speakers into the important features, which effectively addresses the problems mentioned above. The proposed MFF-VAE recognition model effectively combines extended feature extraction across multiple participants (horizontal) with the complementarity and connection of semantic information over the dialogue (vertical), providing a direction for further improving recognition accuracy in the field of multimodal emotion recognition.
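The conclusion describes the MFF-VAE pipeline only at a high level (multimodal inputs, Bi-LSTM context modeling, attention-based fusion, variational latent space, emotion classification). The following is a minimal sketch of how such a model could be wired together; it assumes PyTorch, hypothetical feature dimensions, and simplified attention and VAE components that stand in for the paper's actual design rather than reproduce it.

```python
# Minimal sketch of an MFF-VAE-style model (hypothetical dimensions and layer
# choices; not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFFVAESketch(nn.Module):
    def __init__(self, feat_dims=(128, 74, 512), hidden=128, latent=64, n_classes=6):
        super().__init__()
        # One projection per modality (e.g. text, audio, video features).
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in feat_dims])
        # Bi-LSTM captures contextual semantic information across the utterance sequence.
        self.context = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Attention scores select the important utterance-level features for fusion.
        self.attn = nn.Linear(2 * hidden, 1)
        # VAE encoder: mean and log-variance of the latent emotion representation.
        self.to_mu = nn.Linear(2 * hidden, latent)
        self.to_logvar = nn.Linear(2 * hidden, latent)
        self.decoder = nn.Linear(latent, 2 * hidden)    # reconstruction head
        self.classifier = nn.Linear(latent, n_classes)  # emotion prediction head

    def forward(self, modality_feats):
        # modality_feats: list of tensors, one per modality,
        # each of shape (batch, seq_len, feat_dim_m) for the same utterance sequence.
        projected = [p(x) for p, x in zip(self.proj, modality_feats)]
        fused_in = torch.stack(projected, dim=0).sum(dim=0)      # simple additive fusion
        ctx, _ = self.context(fused_in)                          # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(ctx), dim=1)           # attention over utterances
        pooled = (weights * ctx).sum(dim=1)                      # (batch, 2*hidden)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        recon = self.decoder(z)
        logits = self.classifier(z)
        # VAE objective terms; a classification loss on `logits` would be added to these.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        recon_loss = F.mse_loss(recon, pooled)
        return logits, recon_loss + kl


if __name__ == "__main__":
    model = MFFVAESketch()
    # Toy batch: 4 dialogues, 10 utterances each, three modalities with made-up dimensions.
    feats = [torch.randn(4, 10, d) for d in (128, 74, 512)]
    logits, aux_loss = model(feats)
    print(logits.shape, float(aux_loss))
```

In this sketch the attention pooling plays the role of fusing features "vertically" across the utterance context, while the per-modality projections and additive fusion stand in for the "horizontal" combination of multiple participants' features; the actual fusion scheme in the paper may differ.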