Multi-adversarial variational autoencoder nets for simultaneous image generation and classification

Abdullah Al Zubaer Imran, Demetri Terzopoulos

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

1 Scopus citation


Discriminative deep-learning models are often reliant on copious labeled training data. By contrast, from relatively small corpora of training data, deep generative models can learn to generate realistic images approximating real-world distributions. In particular, proper training of Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs) enables them to perform semi-supervised image classification. Combining the power of these two models, we introduce Multi-Adversarial Variational autoEncoder Networks (MAVENs), a novel deep generative model that incorporates an ensemble of discriminators in a VAE-GAN network in order to perform simultaneous adversarial learning and variational inference. We apply MAVENs to the generation of synthetic images and propose a new distribution measure to quantify the quality of these images. Our experimental results with only 10% labeled training data from the computer vision and medical imaging domains demonstrate performance competitive with state-of-the-art semi-supervised models in simultaneous image generation and classification tasks.
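The core idea of the abstract — replacing a single GAN discriminator with an ensemble whose feedback the generator averages — can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the toy linear discriminators, the ensemble size `K`, and the simple mean aggregation of the non-saturating generator loss are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LinearDiscriminator:
    """Toy stand-in for one discriminator: score = sigmoid(w . x + b).
    In MAVEN these would be deep networks; a linear map suffices to
    illustrate how an ensemble of critics is queried and aggregated."""
    def __init__(self, dim, rng):
        self.w = rng.normal(size=dim)
        self.b = 0.0

    def score(self, x):
        # Probability (per sample) that x is real, in (0, 1).
        return sigmoid(x @ self.w + self.b)

def ensemble_adversarial_loss(discriminators, fake_batch):
    """Generator-side non-saturating GAN loss, averaged over the
    ensemble: L_G = mean_k [ -mean_i log D_k(x_fake_i) ].
    Averaging (one plausible aggregation; min or random choice are
    alternatives) gives the generator a smoother training signal."""
    per_disc = [-np.mean(np.log(d.score(fake_batch) + 1e-12))
                for d in discriminators]
    return float(np.mean(per_disc))

# Example: 3 discriminators scoring a batch of "generated" samples.
dim, K = 8, 3
discs = [LinearDiscriminator(dim, rng) for _ in range(K)]
fake = rng.normal(size=(16, dim))        # stand-in for decoder output
loss = ensemble_adversarial_loss(discs, fake)
```

In the full model this adversarial term would be added to the usual VAE objective (reconstruction loss plus KL divergence), so that the decoder doubles as the GAN generator while the discriminator ensemble supplies the adversarial signal.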

Original language: English
Title of host publication: Advances in Intelligent Systems and Computing
Number of pages: 23
State: Published - 2021

Publication series

Name: Advances in Intelligent Systems and Computing
ISSN (Print): 2194-5357
ISSN (Electronic): 2194-5365

Bibliographical note

Publisher Copyright:
© 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science (all)
