Image captioning generates a semantic description of an image and, with the development of deep learning, typically combines computer vision and natural language processing. It must not only recognize the important objects, their attributes, and their spatial relationships with surrounding objects in the image, but also generate text descriptions that conform to natural-language grammar. In this paper, we propose an image captioning model based on the transformer. In the image understanding part, VGG16 is used to extract image features, and a transformer encoder is used to model relations between different image regions. The text generation part models relations among word features in the description and computes the correlation between text and image from multiple perspectives. On the RSICD dataset, the model achieves BLEU-4, METEOR, ROUGE, and CIDEr scores of 0.29, 0.34, 0.61, and 2.53, respectively. These results are competitive with, and in part better than, state-of-the-art results. They show that the transformer can alleviate overfitting on small datasets, accelerate training, and generalize better.
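The abstract describes a pipeline of VGG16 region features, a transformer encoder over the regions, and a transformer decoder over caption words. The following is a minimal sketch of such a model in PyTorch; the layer sizes, vocabulary size, and module names are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a VGG16 + Transformer captioning model (assumed configuration).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CaptionTransformer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, nhead=8,
                 num_encoder_layers=3, num_decoder_layers=3, max_len=40):
        super().__init__()
        # VGG16 convolutional backbone: a 224x224 input yields a 512x7x7
        # feature map, i.e. 49 image "regions" of dimension 512.
        # In practice ImageNet-pretrained weights would be loaded.
        self.backbone = vgg16(weights=None).features
        self.proj = nn.Linear(512, d_model)          # project region features
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # Encoder relates image regions to each other; decoder relates caption
        # words to each other and to the encoded regions via cross-attention.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_encoder_layers,
            num_decoder_layers=num_decoder_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        # images: (B, 3, 224, 224); captions: (B, T) token ids
        feats = self.backbone(images)                 # (B, 512, 7, 7)
        regions = feats.flatten(2).transpose(1, 2)    # (B, 49, 512)
        src = self.proj(regions)                      # (B, 49, d_model)
        T = captions.size(1)
        pos = torch.arange(T, device=captions.device)
        tgt = self.token_emb(captions) + self.pos_emb(pos)
        # Causal mask so each word attends only to earlier words.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(T).to(captions.device)
        dec = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(dec)                          # (B, T, vocab_size)

# Example forward pass on random data.
model = CaptionTransformer()
logits = model(torch.randn(2, 3, 224, 224),
               torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

At training time the logits would be compared against the ground-truth caption shifted by one position with cross-entropy loss; at inference the decoder is run autoregressively, feeding back its own predictions.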
Remote Sensing Image Captioning Using Transformer
Lect. Notes Electrical Eng.
International Conference on Autonomous Unmanned Systems (ICAUS 2021) ; Changsha, China ; September 24-26, 2021
Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021) ; Chapter : 333 ; 3388-3397
2022-03-18
10 pages
Article/Chapter (Book)
Electronic Resource
English