Unmasking visual perception: neural-like representations emerge in artificial neural networks optimized for Bayesian probabilistic inference.

Theoretical and Computational Neuroscience

Author: Josefina Catoni | Email: jcatoni@sinc.unl.edu.ar


Josefina Catoni, Enzo Ferrante, Diego Milone, Rodrigo Echeveste

1. Research Institute for Signals, Systems and Computational Intelligence, sinc(i), CONICET-UNL

The Bayesian theory of visual perception posits that, given a stimulus, the brain performs probabilistic inference to estimate probability distributions over unobservable variables. This process combines sensory information with prior expectations captured by a prior distribution. To understand how this computation might be carried out in the cortex, we train artificial neural networks on a perceptual task: performing Bayesian inference in the context of natural images. Specifically, we train Variational Autoencoders (VAEs), which simultaneously learn a generative model of image patches and the corresponding inference model. We show that, under the requirement of optimal inference and with sparse activations, representations similar to those observed in the visual cortex emerge within the network. Notably, when an explicit contrast variable is included in the model, the network is able to represent not only mean estimates of these unobservable variables but also the level of uncertainty that remains after the observation.
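To make the setup concrete, the sketch below shows a minimal VAE for flattened image patches with a sparsity penalty on the latent code. This is an illustrative assumption of how such a model could be written, not the authors' implementation: the layer sizes, patch dimension, sparsity weight, and loss form are all hypothetical choices.

```python
# Minimal sketch of a VAE for natural image patches with a sparsity penalty
# on the latent activations. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH_DIM = 16 * 16   # flattened image-patch size (assumed)
LATENT_DIM = 128      # number of latent (unobservable) variables (assumed)

class PatchVAE(nn.Module):
    def __init__(self, x_dim=PATCH_DIM, z_dim=LATENT_DIM):
        super().__init__()
        # Inference (recognition) network: q(z | x)
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.enc_mu = nn.Linear(256, z_dim)       # posterior means
        self.enc_logvar = nn.Linear(256, z_dim)   # posterior log-variances (uncertainty)
        # Generative network: p(x | z)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick: sample z ~ q(z | x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, sparsity_weight=1e-3):
    # Negative ELBO: reconstruction term plus KL to a standard-normal prior,
    # with an added L1 penalty encouraging sparse latent activations (assumed form).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    sparsity = sparsity_weight * mu.abs().sum()
    return recon + kl + sparsity
```

In a sketch of this kind, the encoder's per-stimulus means and log-variances play the roles described in the abstract: point estimates of the unobservable variables and the uncertainty remaining after the observation, respectively.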