Integrating Flexible Normalization into Midlevel Representations of Deep Convolutional Neural Networks

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Deep convolutional neural networks (CNNs) are becoming increasingly popular models to predict neural responses in visual cortex. However, contextual effects, which are prevalent in neural processing and in perception, are not explicitly handled by current CNNs, including those used for neural prediction. In primary visual cortex, neural responses are modulated by stimuli spatially surrounding the classical receptive field in rich ways. These effects have been modeled with divisive normalization approaches, including flexible models, where spatial normalization is recruited only to the degree that responses from center and surround locations are deemed statistically dependent. We propose a flexible normalization model applied to midlevel representations of deep CNNs as a tractable way to study contextual normalization mechanisms in midlevel cortical areas. This approach captures nontrivial spatial dependencies among midlevel features in CNNs, such as those present in textures and other visual stimuli, that arise from tiling high-order features geometrically. We expect that the proposed approach can make predictions about when spatial normalization might be recruited in midlevel cortical areas. We also expect this approach to be useful as part of the CNN tool kit, therefore going beyond more restrictive fixed forms of normalization.
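The flexible normalization the abstract describes divides a unit's response by a pooled energy term, recruiting the spatial surround only to the degree that center and surround are deemed statistically dependent. A minimal NumPy sketch of this idea is below; it is not the authors' implementation, and the box-shaped surround pool and the precomputed `gate` array (standing in for the model's inferred dependency) are illustrative assumptions.

```python
import numpy as np

def flexible_normalization(fmap, gate, sigma=1.0, surround=1):
    """Sketch of flexible divisive normalization on a feature map (H, W, C).

    `gate` is an (H, W, C) array in [0, 1]: near 1 where center and
    surround responses are deemed statistically dependent (surround
    recruited), near 0 where they are treated as independent
    (center-only normalization). In the actual model this weighting is
    inferred from the statistics of the responses; here it is an input.
    """
    energy = fmap ** 2
    # Center energy: the unit's own squared response.
    center = energy
    # Surround energy: mean squared response over a box neighborhood
    # (an assumed pooling shape), excluding the center location.
    pooled = np.zeros_like(energy)
    n = 0
    for dy in range(-surround, surround + 1):
        for dx in range(-surround, surround + 1):
            if dy == 0 and dx == 0:
                continue
            pooled += np.roll(np.roll(energy, dy, axis=0), dx, axis=1)
            n += 1
    surround_energy = pooled / n
    # The surround pool contributes only to the degree the gate recruits it.
    return fmap / np.sqrt(sigma ** 2 + center + gate * surround_energy)
```

With `gate` set to all zeros this reduces to a purely local (center-only) divisive normalization; with `gate` at one it behaves like a fixed surround-normalization model, the more restrictive form the abstract contrasts against.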

Original language: English
Pages (from-to): 2138-2176
Number of pages: 39
Journal: Neural Computation
Volume: 31
Issue number: 11
DOIs
State: Published - Nov 2019

Bibliographical note

Publisher Copyright:
© 2019 Massachusetts Institute of Technology.

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
