Image captioning generates a semantic description of an image and, with the development of deep learning, typically combines computer vision and natural language processing. Image captioning must not only recognize the important objects, their attributes, and their spatial relationships with surrounding objects in the image, but also generate text descriptions that conform to natural language rules. In this paper, we propose an image captioning model based on the transformer. In the image understanding part, VGG16 is used to extract image features, and a transformer encoder is used to extract relations between different image regions. The text generation part extracts relations among the word features of the description and computes the correlation between text and image from multiple perspectives. The experimental results on the RSICD dataset are 0.29, 0.34, 0.61, and 2.53 for BLEU4, METEOR, ROUGE, and CIDEr, respectively. These results are competitive with, and in some cases better than, the state-of-the-art results. They show that the transformer can alleviate overfitting on small datasets, accelerate the training process, and generalize better.
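
For readers who want a concrete picture of the pipeline described in the abstract, the following is a minimal PyTorch sketch of a VGG16-plus-transformer captioner in the same spirit. It is not the authors' implementation: the vocabulary size, model width, layer counts, and the omission of positional encoding are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision.models import vgg16

VOCAB_SIZE = 10000   # assumed vocabulary size (not from the paper)
D_MODEL = 512        # assumed model width

class CaptioningTransformer(nn.Module):
    def __init__(self, vocab_size=VOCAB_SIZE, d_model=D_MODEL):
        super().__init__()
        # VGG16 convolutional backbone; its 7x7x512 feature map is treated
        # as a sequence of 49 image-region features.
        self.backbone = vgg16(weights=None).features
        self.proj = nn.Linear(512, d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        # Encoder relates image regions to each other; decoder attends to
        # them while generating caption tokens (positional encoding omitted).
        self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                          num_encoder_layers=3,
                                          num_decoder_layers=3,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        feats = self.backbone(images)              # (B, 512, 7, 7)
        feats = feats.flatten(2).transpose(1, 2)   # (B, 49, 512)
        src = self.proj(feats)                     # (B, 49, d_model)
        tgt = self.embed(captions)                 # (B, T, d_model)
        mask = self.transformer.generate_square_subsequent_mask(captions.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)                    # (B, T, vocab_size)

if __name__ == "__main__":
    model = CaptioningTransformer()
    images = torch.randn(2, 3, 224, 224)
    captions = torch.randint(0, VOCAB_SIZE, (2, 12))
    print(model(images, captions).shape)           # torch.Size([2, 12, 10000])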


    Title:

    Remote Sensing Image Captioning Using Transformer


    Additional title:

    Lecture Notes in Electrical Engineering


    Contributors:
    Wu, Meiping (editor) / Niu, Yifeng (editor) / Gu, Mancang (editor) / Cheng, Jin (editor) / Wang, Binze (author) / Xi, Jiangbo (author) / Wang, Xingrun (author) / Fang, Jianwu (author) / Jiang, Wandong (author) / Xie, Dashuai (author)

    Conference:

    International Conference on Autonomous Unmanned Systems; Changsha, China; September 24-26, 2021



    Publication date:

    2022-03-18


    Size:

    10 pages





    Type of media:

    Article/Chapter (Book)


    Type of material:

    Electronic Resource


    Language:

    English




    Remote Sensing Image Captioning Using Transformer

    Wang, Binze / Xi, Jiangbo / Wang, Xingrun et al. | British Library Conference Proceedings | 2022


    Enhanced Dense Image Captioning Based On Transformers

    Goswami, Tilottama / Potu, Sathvika / Reddy, Kuntluri Prasanna et al. | IEEE | 2024


    Traffic Scenario Understanding and Video Captioning via Guidance Attention Captioning Network

    Liu, Chunsheng / Zhang, Xiao / Chang, Faliang et al. | IEEE | 2024


    A Comparative Study on Optimizers for Automatic Image Captioning

    Thavaraj A, Eliyah Immanuel / Juliet, Sujitha / J, Anila Sharon | IEEE | 2022


    Military Image Captioning for Low-Altitude UAV or UGV Perspectives

    Lizhi Pan / Chengtian Song / Xiaozheng Gan et al. | DOAJ | 2024

    Free access