fLoc

Functional localizer experiment developed by the Stanford Vision & Perception Neuroscience Lab to define category-selective cortical regions

» download

Experiment code and stimuli are available for download on GitHub (recommended) or directly (zip archive).

» design

OVERVIEW

This package contains stimuli and presentation code for a functional localizer experiment that can be used to define category-selective cortical regions that respond preferentially to faces (e.g., fusiform face area), places (e.g., parahippocampal place area), bodies (e.g., extrastriate body area), and characters (e.g., visual word form area). We recommend collecting at least 2-3 runs of localizer data per subject to have sufficient power to define these and other regions of interest.

SPECIFICS

The localizer uses a miniblock design in which eight stimuli of the same category are presented in each 4-second trial (500 ms/image). For each 5-minute run, a novel stimulus sequence is generated that counterbalances the ordering of the five stimulus domains (characters, bodies, faces, places, and objects) and a blank baseline condition. For each stimulus domain, there are two associated image categories that are presented in alternation over the course of a run but never intermixed within a trial (a toy sequence-generation sketch follows the list):

  • Characters
    • Word - pronounceable English pseudowords
    • Number - uncommon whole numbers
  • Bodies
    • Body - whole bodies with cropped heads
    • Limb - isolated arms, legs, hands, and feet
  • Faces
    • Adult - portraits of adults
    • Child - portraits of children
  • Places
    • Corridor - indoor views of hallways
    • House - outdoor views of buildings
  • Objects
    • Car - motor vehicles with four wheels
    • Instrument - musical string instruments
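
As a toy illustration of this structure (a sketch only, not the package's actual sequence-generation algorithm, and with an assumed block count), a block-randomized condition order could be built in MATLAB like so:

  % Toy sketch: each of the six conditions (five stimulus domains plus
  % baseline) appears once per block of six 4-second trials, in a fresh
  % random order per block.
  conds   = {'characters', 'bodies', 'faces', 'places', 'objects', 'baseline'};
  nBlocks = 12;                                  % hypothetical run length
  order   = zeros(1, nBlocks * numel(conds));
  for b = 1:nBlocks
      idx = (b - 1) * numel(conds) + (1:numel(conds));
      order(idx) = randperm(numel(conds));
  end
  trialConds = conds(order);                     % condition name per trial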

TASK

The experimenter selects which behavioral task the subject performs during the experiment. Three options are available:

  • 1-back - detect back-to-back image repetition
  • 2-back - detect image repetition with an intervening stimulus
  • Oddball - detect replacement of a stimulus with a scrambled image


The frequency of task probes (i.e., trials containing an image repetition or oddball) is equated across stimulus categories, with no more than one such event per trial. By default, subjects are allotted 1 second to respond to a task probe, and responses outside of this time window are counted as false alarms. The behavioral data displayed at the end of each run therefore summarize the hit rate (percentage of task probes detected within the time limit) and the false alarm count (number of responses outside of task-relevant time windows) for the preceding run.
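
A minimal sketch of this scoring scheme, with hypothetical variable names (probeOnsets, respTimes) and example values standing in for the package's actual data structures:

  % Score hits and false alarms against the 1-second response window.
  probeOnsets = [12 44 95];                 % hypothetical probe times (s)
  respTimes   = [12.6 50.2 95.4];           % hypothetical button presses (s)
  win  = 1;                                 % response window (s)
  hits = false(size(probeOnsets));
  for p = 1:numel(probeOnsets)
      hits(p) = any(respTimes >= probeOnsets(p) & ...
                    respTimes <= probeOnsets(p) + win);
  end
  hitRate = 100 * mean(hits);               % percent of probes detected
  % A response is a false alarm if it falls inside no probe's window
  % (implicit expansion; requires MATLAB R2016b or later).
  inWin = any(respTimes(:)' >= probeOnsets(:) & ...
              respTimes(:)' <= probeOnsets(:) + win, 1);
  falseAlarms = sum(~inWin);                % responses outside all windows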

» stimuli

DOWNLOADS

The entire stimulus set included in the localizer package is available for download in various formats:
JPG | PowerPoint

EXAMPLE IMAGES

[Image grids: eight example images each of pseudowords, numbers, bodies, limbs, adult faces, child faces, corridors, houses, cars, and instruments]

» instructions

SOFTWARE

The code included in this package is written in MATLAB and calls Psychtoolbox-3 functions.

SETUP

  1. Navigate to the functions directory (~/fLoc/functions/)
  2. Modify response collection functions for button box and keyboard:
    1. getBoxNumber.m - Change value of buttonBoxID to the "Product ID number" of local scanner button box (line 9)
    2. getKeyboardNumber.m - Change value of keyboardID to the "Product ID number" of native laptop keyboard (line 9)
  3. Add Psychtoolbox to MATLAB path
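
If the Product ID of the button box or keyboard is unknown, one way to look it up is with the standard Psychtoolbox-3 HID query below (a sketch; the exact fields reported can vary by operating system):

  % List attached HID devices with their product IDs so that
  % buttonBoxID and keyboardID can be set in the functions above.
  devices = PsychHID('Devices');
  for d = 1:numel(devices)
      fprintf('%2d: %-30s productID = %d\n', ...
              d, devices(d).product, devices(d).productID);
  end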

EXECUTION

  1. Navigate to base experiment directory in MATLAB (~/fLoc/)
  2. Execute runme.m wrapper function:
    1. Enter runme to execute 3 runs sequentially (default - entire stimulus set is used without recycling images)
    2. Enter runme(N) to execute N runs sequentially (stimuli recycled after 3 runs)
    3. If the experiment is interrupted, enter runme(N,startRun) to resume the pre-generated sequence of runs, executing runs startRun through N
  3. Enter subject initials when prompted
  4. Select behavioral task:
    1. Enter 1 for 1-back image repetition detection task
    2. Enter 2 for 2-back image repetition detection task
    3. Enter 3 for oddball detection task
  5. Select triggering method:
    1. Enter 0 if not attempting to trigger scanner (e.g., while debugging)
    2. Enter 1 to automatically trigger scanner at onset of experiment
  6. Wait for task instructions screen to display
  7. Press g to start experiment (and trigger scanner if option is selected)
  8. Wait for behavioral performance to display after each run
  9. Press g to continue experiment and start execution of next run
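
For example, a typical session at the MATLAB prompt might look like this (run counts are illustrative):

  runme           % default: 3 runs, full stimulus set, no image recycling
  runme(4)        % 4 runs; stimuli are recycled after the first 3 runs
  runme(4, 3)     % resume a pre-generated 4-run sequence from run 3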

DEBUGGING

  1. Press [Command + period] to halt experiment
  2. Enter sca to return to MATLAB command line
  3. Please report bugs on GitHub

» analysis

DATA STRUCTURES

Session information, behavioral data, and presentation parameters are saved as MAT files:

  • Session information and behavioral data files are written for each run and saved in the data directory (~/fLoc/data/)
  • Script and parameter files are written for each stimulus sequence and stored in the scripts directory (~/fLoc/scripts/)
  • Parameter files (.par) contain information used to generate the design matrix for a General Linear Model (GLM):
    1. Trial onset time (seconds from end of countdown)
    2. Condition number (0 = baseline)
    3. Condition name
    4. Condition plotting color (RGB values from 0 to 1)
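
A .par file entry might therefore look like the following (columns in the order listed above; the onset times, condition numbers, and RGB values shown here are purely illustrative):

  0.000   0   baseline   1 1 1
  4.000   3   adult      0 0 1
  8.000   1   word       1 0 0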

GENERAL LINEAR MODEL

After acquiring and preprocessing the functional data, a GLM is fit to the time course of each voxel to estimate beta values of response amplitude for each stimulus category (e.g., Worsley et al., 2002). For preprocessing, we recommend motion correction, detrending, and transformation of the time series to percent signal change, without spatial smoothing.
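
In its simplest form, the estimation step is ordinary least squares. A minimal sketch, assuming a design matrix X (time points x conditions, one HRF-convolved regressor per condition plus a constant term, built from the .par files) and a preprocessed voxel time course y:

  % Minimal GLM sketch; X and y are assumed inputs as described above.
  beta = pinv(X) * y;      % least-squares response-amplitude estimates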

REGIONS OF INTEREST

Category-selective regions are defined by statistically contrasting the beta values of the categories belonging to a given stimulus domain against those of all other categories in each voxel and thresholding the resulting maps (e.g., t-value > 4):

  • Character-selective regions
    • [word number] > [body limb child adult corridor house car instrument]
    • selective voxels typically clustered around the inferior occipital sulcus (IOS) and along the occipitotemporal sulcus (OTS)
  • Body-selective regions
    • [body limb] > [word number child adult corridor house car instrument]
    • selective voxels typically clustered around the lateral occipital sulcus (LOS), inferior temporal gyrus (ITG), and occipitotemporal sulcus (OTS)
  • Face-selective regions
    • [child adult] > [word number body limb corridor house car instrument]
    • selective voxels typically clustered around the inferior occipital gyrus (IOG), posterior fusiform gyrus (Fus), and mid-fusiform sulcus (MFS)
  • Place-selective regions
    • [corridor house] > [word number body limb child adult car instrument]
    • selective voxels typically clustered around the transverse occipital sulcus (TOS) and collateral sulcus (CoS)
  • Object-selective regions
    • [car instrument] > [word number body limb child adult corridor house]
    • selective voxels are not typically clustered in occipitotemporal cortex when contrasted against characters, bodies, faces, and places
    • object-selective regions in lateral occipital cortex can be defined in a separate experiment (contrasting objects > scrambled objects)
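
As a concrete example, the character-selective contrast above could be expressed as a zero-sum weight vector applied to each voxel's beta estimates (assuming a condition order of word, number, body, limb, adult, child, corridor, house, car, instrument):

  % Characters > all other categories: +1/2 on the two character
  % conditions, -1/8 on the remaining eight (weights sum to zero).
  c   = [ones(1, 2) / 2, -ones(1, 8) / 8];
  con = c * beta;          % contrast value for one voxel's beta vector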

[Figure: face-, body-, character-, and place-selective regions shown on the left hemisphere in lateral occipitotemporal cortex (LOTC), posterior ventral temporal cortex, and mid ventral temporal cortex]

Category-selective regions defined in three anatomical sections of occipitotemporal cortex (see Stigliani et al., 2015, Fig. 3A) are shown above on the inflated cortical surface of a representative subject, with anatomical labels overlaid on the sulci and gyri referenced in the list above.

» citation

ARTICLE

Stigliani, A., Weiner, K. S., & Grill-Spector, K. (2015). Temporal processing capacity in high-level visual cortex is domain specific. Journal of Neuroscience, 35(36), 12412-12424.
HTML | PDF

CONTACT

Anthony Stigliani: astiglia [at] stanford [dot] edu
Kalanit Grill-Spector: kalanit [at] stanford [dot] edu

PLEASE CITE US!

To acknowledge using our localizer or stimulus set, you might include a sentence like one of the following:

"We defined regions of interest using the fLoc functional localizer (Stigliani et al., 2015)..."
"We used stimuli included in the fLoc functional localizer package (Stigliani et al., 2015)..."