Universal multimode background subtraction

Research output: Contribution to journal › Article › peer-review

114 Scopus citations

Abstract

In this paper, we present a complete change detection system named multimode background subtraction. The universal nature of the system allows it to robustly handle a multitude of challenges associated with video change detection, such as illumination changes, dynamic background, camera jitter, and a moving camera. The system comprises multiple innovative mechanisms in background modeling, model update, pixel classification, and the use of multiple color spaces. The system first creates multiple background models of the scene, followed by an initial foreground/background probability estimate for each pixel. Next, image pixels are grouped into megapixels, which are used to spatially denoise the initial probability estimates and generate binary masks for both the RGB and YCbCr color spaces. The masks obtained from the two color spaces are then combined to separate foreground pixels from the background. Comprehensive evaluation of the proposed approach on publicly available test sequences from the CDnet and ESI data sets shows that our system outperforms other state-of-the-art algorithms.
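
The sketch below illustrates the pipeline described in the abstract: a bank of background models, a per-pixel foreground probability, megapixel-level spatial denoising, and a combination of binary masks from the RGB and YCbCr color spaces. It is a minimal sketch, not the authors' implementation; the Gaussian distance-to-model scoring, the number of exemplar models, the 8x8 block size, the 0.5 threshold, and the OR-style mask fusion are all illustrative assumptions not specified in the abstract.

    # Minimal sketch (illustrative assumptions, not the paper's algorithm details).
    import numpy as np
    import cv2  # used only for the RGB -> YCrCb conversion

    def foreground_probability(frame, models, sigma=10.0):
        """Per-pixel foreground probability against a bank of background models.

        `models` has shape (K, H, W, 3): K exemplar background frames.
        The probability is high when the pixel is far from every model.
        """
        diffs = np.linalg.norm(frame[None].astype(np.float32) -
                               models.astype(np.float32), axis=-1)   # (K, H, W)
        best = diffs.min(axis=0)                                     # closest model wins
        return 1.0 - np.exp(-(best ** 2) / (2.0 * sigma ** 2))       # (H, W) in [0, 1]

    def megapixel_denoise(prob, block=8):
        """Spatially denoise the probability map by averaging over square
        'megapixel' blocks (a stand-in for the paper's megapixel grouping).
        Edges are cropped to a multiple of the block size."""
        h, w = prob.shape
        h2, w2 = h - h % block, w - w % block
        blocks = prob[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
        smoothed = blocks.mean(axis=(1, 3))
        return np.kron(smoothed, np.ones((block, block)))            # back to pixel grid

    def umbs_mask(frame_rgb, models_rgb, models_ycc, thresh=0.5):
        """Binary change mask from the RGB and YCbCr color spaces combined."""
        frame_ycc = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2YCrCb)
        mask_rgb = megapixel_denoise(foreground_probability(frame_rgb, models_rgb)) > thresh
        mask_ycc = megapixel_denoise(foreground_probability(frame_ycc, models_ycc)) > thresh
        return mask_rgb | mask_ycc   # one simple fusion choice: foreground if either space flags it

Model construction and update (e.g., how the exemplar frames are selected and refreshed over time) are part of the full system but are omitted here for brevity.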

Original language: English
Article number: 7904604
Pages (from-to): 3249-3260
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 26
Issue number: 7
DOIs
State: Published - Jul 2017

Bibliographical note

Funding Information:
This work was supported by the National Science Foundation under Grant 1237134.

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Background model bank
  • Background subtraction
  • Binary classifiers
  • Change detection
  • Color spaces
  • Computer vision
  • Foreground segmentation
  • Pixel classification

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
