3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model

Chengxi Li, Brent Harrison

Research output: Contribution to journal › Conference article › peer-review

2 Scopus citations

Abstract

In this paper, we build a multi-style generative model for stylish image captioning that uses multi-modality image features: visual features from ResNeXt and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes these multi-modality features and decodes them into captions. We demonstrate the effectiveness of our model at generating human-like captions by examining its performance on two datasets: the PERSONALITY-CAPTIONS dataset and the FlickrStyle10K dataset. We compare against a variety of state-of-the-art baselines on automatic NLP metrics such as BLEU, ROUGE-L, CIDEr, and SPICE. A qualitative study was also conducted to verify that our 3M model can generate captions in different styles.
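Since this record carries only the abstract and not the architecture details, the following is a minimal, self-contained PyTorch sketch of the general idea the abstract describes: an UP-DOWN-style two-LSTM captioner that attends over two feature sets, one visual (ResNeXt-like region features) and one textual (DenseCap-like phrase embeddings). All class names, dimensions, and the fusion scheme are illustrative assumptions, not the authors' actual 3M implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Additive (Bahdanau-style) attention over a set of feature vectors."""
    def __init__(self, feat_dim, hid_dim, att_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, att_dim)
        self.hid_proj = nn.Linear(hid_dim, att_dim)
        self.score = nn.Linear(att_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, N, feat_dim), hidden: (B, hid_dim)
        e = self.score(torch.tanh(
            self.feat_proj(feats) + self.hid_proj(hidden).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)              # (B, N, 1) attention weights
        return (alpha * feats).sum(dim=1)        # (B, feat_dim) context vector

class MultiUpDownSketch(nn.Module):
    """Hypothetical two-LSTM UP-DOWN decoder attending over two modalities:
    ResNeXt-like region features and DenseCap-like phrase embeddings."""
    def __init__(self, vocab_size, emb_dim=512, hid_dim=512,
                 vis_dim=2048, txt_dim=512, att_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Attention LSTM sees previous language state, mean-pooled features, word.
        self.att_lstm = nn.LSTMCell(hid_dim + vis_dim + txt_dim + emb_dim, hid_dim)
        self.vis_att = SoftAttention(vis_dim, hid_dim, att_dim)
        self.txt_att = SoftAttention(txt_dim, hid_dim, att_dim)
        # Language LSTM consumes both attended contexts plus the attention state.
        self.lang_lstm = nn.LSTMCell(vis_dim + txt_dim + hid_dim, hid_dim)
        self.logit = nn.Linear(hid_dim, vocab_size)

    def forward(self, vis_feats, txt_feats, tokens):
        # vis_feats: (B, Nv, vis_dim); txt_feats: (B, Nt, txt_dim)
        # tokens: (B, T) ground-truth caption tokens (teacher forcing)
        B, T = tokens.shape
        h_att = vis_feats.new_zeros(B, self.att_lstm.hidden_size)
        c_att = torch.zeros_like(h_att)
        h_lang = torch.zeros_like(h_att)
        c_lang = torch.zeros_like(h_att)
        vis_mean, txt_mean = vis_feats.mean(1), txt_feats.mean(1)
        logits = []
        for t in range(T):
            w = self.embed(tokens[:, t])
            h_att, c_att = self.att_lstm(
                torch.cat([h_lang, vis_mean, txt_mean, w], dim=1), (h_att, c_att))
            v = self.vis_att(vis_feats, h_att)   # attended visual context
            u = self.txt_att(txt_feats, h_att)   # attended textual context
            h_lang, c_lang = self.lang_lstm(
                torch.cat([v, u, h_att], dim=1), (h_lang, c_lang))
            logits.append(self.logit(h_lang))
        return torch.stack(logits, dim=1)        # (B, T, vocab_size)

if __name__ == "__main__":
    model = MultiUpDownSketch(vocab_size=1000)
    vis = torch.randn(2, 36, 2048)     # e.g. 36 region features per image
    txt = torch.randn(2, 10, 512)      # e.g. 10 dense-caption phrase embeddings
    caps = torch.randint(0, 1000, (2, 12))
    print(model(vis, txt, caps).shape)  # torch.Size([2, 12, 1000])
```

The two-LSTM split follows the standard UP-DOWN design (an attention LSTM that decides where to look and a language LSTM that generates words); the sketch simply duplicates the attention module per modality and concatenates the two contexts, which is one plausible reading of "encodes multi-modality features and decodes them into captions."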

Original language: English
Journal: Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS
Volume: 34
State: Published - 2021
Event: 34th International Florida Artificial Intelligence Research Society Conference, FLAIRS-34 2021 - North Miami Beach, United States
Duration: May 16, 2021 – May 19, 2021

Bibliographical note

Publisher Copyright:
© 2021 by the authors. All rights reserved.

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
