At what spatial scale are object categories represented in ventral temporal cortex?
How is the brain organized to efficiently process visual input?
While decades of research have explored the arrangement of visually responsive neurons in the brain,
fundamental questions remain regarding the functional utility of particular neural layouts,
the development of the visual system, and how visual information is represented in populations of neurons.
To address these questions, we combine fine-scale measurements of higher visual cortex with a biologically inspired modeling approach.
In collaboration with Prof. Kendrick Kay at the University of Minnesota,
we study the responses of higher visual cortex to images of object categories (such as faces and places)
measured at an ultra-high resolution of 0.8 mm per voxel side. Comparing these measurements to standard-resolution data (2.4 mm voxels)
allows us to better understand the spatial scale at which information about object categories is represented in the brain.
These data can inform and constrain models of visual cortex: by better understanding the
representation of visual information in the human brain, we can more easily differentiate models that accurately describe the visual system.