Multimodal emotion recognition (MER), which processes and analyses comments posted on social media and identifies the corresponding target emotional states, plays an important role in education, social media and, especially, human-computer interaction (HCI). To address emotion recognition for video, text and other forms of social media data, we propose a sentiment fusion Transformer. The Transformer can model combinations of different feature relations and extract global key features, so it is introduced for long-range sequential context extraction, and residual connections are placed after the Transformer to prevent information collapse. In addition, a multi-head attention mechanism combines the extracted feature patterns, and the fused feature vectors are finally fed into a classifier that outputs the predicted sentiment, achieving sentiment recognition in a multimodal context.
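For concreteness, the following is a minimal PyTorch sketch of the fusion pipeline outlined in the abstract (modality projection, Transformer encoding with a post-encoder residual connection, multi-head attention fusion, and a classifier). The module layout, feature dimensions, and the mean-pooling step are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a sentiment fusion Transformer pipeline.
# Dimensions, layer counts, and pooling are assumptions for illustration only.
import torch
import torch.nn as nn


class SentimentFusionSketch(nn.Module):
    def __init__(self, text_dim=768, video_dim=512, d_model=256, n_heads=4, n_classes=3):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        # Transformer encoders for long-range sequential context in each modality.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.video_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Multi-head attention used to fuse the two modalities.
        self.fusion_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, text_feats, video_feats):
        # text_feats: (batch, T_text, text_dim); video_feats: (batch, T_video, video_dim)
        t = self.text_proj(text_feats)
        v = self.video_proj(video_feats)
        # Residual connection placed after the Transformer to limit information collapse.
        t = t + self.text_encoder(t)
        v = v + self.video_encoder(v)
        # Cross-modal fusion: text tokens attend over video keys/values.
        fused, _ = self.fusion_attn(query=t, key=v, value=v)
        # Pool over time and classify the fused representation into sentiment classes.
        return self.classifier(fused.mean(dim=1))


if __name__ == "__main__":
    model = SentimentFusionSketch()
    logits = model(torch.randn(2, 20, 768), torch.randn(2, 30, 512))
    print(logits.shape)  # torch.Size([2, 3])
```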
Multimodal Sentiment Recognition Based on Sentiment Fusion Transformer
2023-10-11
2678047 bytes
Conference paper
Electronic Resource
English
A survey of multimodal sentiment analysis
British Library Online Contents | 2017
Multimodal Fusion-based Swin Transformer for Facial Recognition Micro-Expression Recognition
British Library Conference Proceedings | 2022
Guest editorial: Multimodal sentiment analysis and mining in the wild
British Library Online Contents | 2017
Sensitive Information Recognition Based on Short Text Sentiment Analysis
British Library Online Contents | 2016