Receptive field modeling of the neural mechanisms of face perception and attention
The ventral visual cortex is considered the end stage of the “what” pathway for visual recognition in the human brain.
Representations at this stage of processing are typically considered to be abstracted from and invariant
to spatial transformations. However, retinotopic biases and position information have been
demonstrated in several high-level regions of ventral temporal cortex (VTC),
providing an empirical challenge to classic theories of cortical organization
and high-level visual processing.
It is not currently understood whether and how these spatial
representations enable or constrain visual perception and recognition.
That is, is spatial coding in the VTC a passive, concomitant property of high-level visual processing,
or does it play a functional role in the way we recognize objects?
This project uses functional magnetic resonance imaging (fMRI)
of the human face network, behavioral measurements of face recognition,
and cutting-edge deep learning methods to characterize spatial representations
in the VTC. We focus on the face-selective network in the human brain,
an ideal model system for deriving links between properties of the brain and behavior.
We employ the population receptive field (pRF) model, a computational model
that characterizes the region of visual space driving activity in each voxel in the brain;
it is a generative model, which enables making quantitative predictions of neural responses
from the properties of any viewed stimulus. We can then evaluate the functional utility of spatial
integration in recognition using a hierarchical convolutional neural network
(HCNN, a biologically-inspired model of the ventral “what” pathway) updated with our fMRI measurements.
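The generative logic of the pRF model can be sketched in a few lines: a voxel's pRF is commonly modeled as a 2D Gaussian over visual space, and the predicted (pre-hemodynamic) response to any stimulus is the overlap between the stimulus aperture and that Gaussian. The sketch below is illustrative only; the function names, grid resolution, and parameter values are assumptions, not the project's actual pipeline.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, grid_size=101, extent=10.0):
    """2D Gaussian pRF centered at (x0, y0) with size sigma,
    in degrees of visual angle, sampled on a square grid."""
    coords = np.linspace(-extent, extent, grid_size)
    xx, yy = np.meshgrid(coords, coords)
    rf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return rf / rf.sum()  # normalize so responses are comparable across pRF sizes

def predicted_response(prf, stimulus_mask):
    """Predicted response: overlap between a binary stimulus aperture and the pRF."""
    return float((prf * stimulus_mask).sum())

# A bar stimulus covering the left half of the visual field
coords = np.linspace(-10, 10, 101)
xx, _ = np.meshgrid(coords, coords)
bar = (xx < 0).astype(float)

left_prf = gaussian_prf(-4.0, 0.0, 2.0)   # pRF centered inside the stimulated region
right_prf = gaussian_prf(4.0, 0.0, 2.0)   # pRF centered in the blank half
# The pRF overlapping the stimulus yields a much larger predicted response.
```

Because the model predicts a response for any stimulus, fitting the pRF parameters (center, size) per voxel is a standard nonlinear regression against the measured fMRI time series.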
Through this, we seek to build a comprehensive, computational framework for understanding how
spatial computations across the ventral “what” pathway enable and constrain face recognition behavior.