Social media has become an important means for people to share their daily lives and express emotions. Shaped by this environment, users are no longer limited to text alone and increasingly convey information through multiple modalities. Existing multimodal sentiment analysis methods, however, face several challenges. First, most current methods judge sentiment polarity primarily from the textual content and struggle to associate the shared emotional features of text and images: for a given image, the extracted image features are the same regardless of the text fed into the model, so the model cannot draw on the text to support better sentiment analysis. Second, social media data often exhibit inconsistencies between the textual content and what the image depicts, making it difficult to obtain visually sensitive textual representations. Third, most existing approaches extract image features with a deep residual network (ResNet) and fuse them directly with the text, lacking the ability to adaptively select and focus on the key information in the input image. To address these challenges, the following improvements are made:
1. Drawing on the target-oriented multimodal BERT (TomBERT) model for multimodal aspect-level sentiment analysis, we design a text-image matching module tailored to the experimental needs of this study; it serves as the primary component for multimodal sentiment analysis.
2. We incorporate the Image Captioning with Transformers (CATR) module to convert images into textual descriptions, enriching the information available from the text data and alleviating the mismatch between textual content and image descriptions.
3. An improved Convolutional Block Attention Module (CBAM) is integrated to adaptively select and concentrate on the essential image features.
Combining these components, we design a model named Image Captioning Joint Attention Mechanism (ICAM).
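The abstract names three building blocks (a TomBERT-style text-image matching module, CATR captioning, and an improved CBAM) but this record does not reproduce the paper's implementation. As an illustration only, the snippet below is a minimal PyTorch sketch of a standard CBAM block, the kind of module the authors report adapting, applied to ResNet feature maps. All class names and hyperparameters (reduction ratio 16, 7x7 spatial kernel) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze spatial dims with average and max pooling, score channels with a shared MLP."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling -> (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling     -> (B, C)
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Score spatial positions from channel-wise average and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)   # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial_att(self.channel_att(x))


if __name__ == "__main__":
    feats = torch.randn(8, 2048, 7, 7)      # e.g. the last convolutional feature map of ResNet-50
    refined = CBAM(channels=2048)(feats)
    print(refined.shape)                    # torch.Size([8, 2048, 7, 7])
```

In the pipeline described above, the refined image features would then be fused with the BERT representation of the text, to which the CATR-generated caption has been appended; those fusion details are specific to the paper and are not sketched here.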


Title: Multimodal Sentiment Analysis Based on Image Captioning and Attention Mechanism
Contributors: Sun, Ye (author) / Jin, Guozhe (author) / Zhao, Yahui (author) / Cui, Rongyi (author)
Publication date: 2023-10-11
Size: 2678341 bytes
Type of media: Conference paper
Type of material: Electronic Resource
Language: English



Traffic Scenario Understanding and Video Captioning via Guidance Attention Captioning Network
Liu, Chunsheng / Zhang, Xiao / Chang, Faliang et al. | IEEE | 2024

A survey of multimodal sentiment analysis
Soleymani, Mohammad / Garcia, David / Jou, Brendan et al. | British Library Online Contents | 2017

Hierarchical & multimodal video captioning: Discovering and transferring multimodal knowledge for vision to language
Liu, An-An / Xu, Ning / Wong, Yongkang et al. | British Library Online Contents | 2017