This paper explores the fusion of computer vision and natural language processing for narrative generation. We propose a methodology that combines the GRiT model for dense captioning with the GPT model for story generation: GRiT extracts detailed, region-level object descriptions from an image, and GPT constructs a cohesive storyline conditioned on those descriptions. The integrated approach produces narratives grounded in both the visual content of the image and its textual descriptions. Through experimental validation and qualitative analysis, we demonstrate the effectiveness of our method in creating engaging stories from visual content. The work advances AI-driven narrative generation and opens avenues for applications in digital storytelling, content creation, and creative AI.
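As a rough illustration of the two-stage pipeline described above, the Python sketch below feeds dense captions into a story-generation prompt. It is a minimal sketch, not the paper's implementation: `run_grit_dense_captioning` is a hypothetical placeholder for a GRiT inference wrapper, the OpenAI chat API stands in for "the GPT model", and the model name, prompt wording, and example captions are illustrative assumptions.

```python
# Sketch of the dense-caption -> story pipeline (assumed wiring, not the paper's code).
from openai import OpenAI


def run_grit_dense_captioning(image_path: str) -> list[str]:
    """Placeholder for GRiT inference: return one caption per detected region.

    In practice this would load a GRiT (Detectron2-based) checkpoint and run it
    on the image; hard-coded captions keep the sketch self-contained and runnable.
    """
    return [
        "a child holding a red kite",
        "a dog running on wet sand",
        "storm clouds gathering over the sea",
    ]


def generate_story(image_path: str, model: str = "gpt-4o-mini") -> str:
    """Turn region-level captions into a short story via a GPT chat completion."""
    captions = run_grit_dense_captioning(image_path)
    prompt = (
        "Write a short, cohesive story that weaves together these objects and "
        "details observed in a single image:\n- " + "\n- ".join(captions)
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_story("beach_scene.jpg"))  # illustrative file name
```

Separating the captioning step behind a single function makes it easy to swap in the actual GRiT model, or another dense captioner, without touching the story-generation stage.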
Enhanced Dense Image Captioning Based On Transformers
Conference paper | 2024-11-06 | English