Abstract
In this paper, we build a multi-style generative model for stylized image captioning that uses multimodal inputs: ResNeXt image features and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes these multi-modality features and decodes them into captions. We demonstrate the effectiveness of our model at generating human-like captions by examining its performance on two datasets, the PERSONALITY-CAPTIONS dataset and the FlickrStyle10K dataset. We compare against a variety of state-of-the-art baselines on automatic NLP metrics such as BLEU, ROUGE-L, CIDEr, and SPICE. A qualitative study has also been conducted to verify that our 3M model can be used to generate different stylized captions.
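As a rough illustration of the multi-modality encoding the abstract describes (combining ResNeXt image features with DenseCap-generated text features before decoding), the sketch below shows one simple fusion strategy: pooling the text feature vectors and concatenating them with the image feature vector. The function name, dimensions, and the concatenation scheme are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: fusing image features with pooled text features
# into one joint encoding. Names and dimensions are illustrative only.

def fuse_features(image_feats, text_feats):
    """Concatenate an image feature vector with the mean of a set of
    text (region-caption) feature vectors."""
    # Mean-pool the text vectors component-wise.
    mean_text = [sum(col) / len(text_feats) for col in zip(*text_feats)]
    # Simple concatenation yields the joint multimodal encoding.
    return image_feats + mean_text

image_feats = [0.2, 0.5, 0.1]           # e.g. pooled ResNeXt features
text_feats = [[1.0, 0.0], [0.0, 1.0]]   # e.g. DenseCap phrase embeddings
fused = fuse_features(image_feats, text_feats)
print(fused)  # [0.2, 0.5, 0.1, 0.5, 0.5]
```

In a full encoder-decoder captioner, a fused vector like this would condition the decoder that generates the stylized caption; the paper's actual Multi-UPDOWN architecture is more elaborate than this pooling-and-concatenation sketch.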
| Original language | English |
| --- | --- |
| Journal | Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS |
| Volume | 34 |
| State | Published - 2021 |
| Event | 34th International Florida Artificial Intelligence Research Society Conference, FLAIRS-34 2021 - North Miami Beach, United States. Duration: May 16 2021 → May 19 2021 |
Bibliographical note
Publisher Copyright: © 2021 by the authors. All rights reserved.
ASJC Scopus subject areas
- Artificial Intelligence
- Software