Manuscript accepted for publication in Nature Neuroscience: The Neuronal Basis of Fear Generalization in Humans

Our "Neuronal Basis of Fear Generalization" manuscript has been accepted for publication in Nature Neuroscience.

You can download the pdf here.

It has also been highlighted in Nature Reviews Neuroscience.

Effect of aversive learning on discrimination of faces

In her MSc thesis, Lea Kampermann shows that humans perceptually discriminate faces better when these are paired with an aversive outcome. The effect was specific to the face that was paired with the aversive outcome and was not observed for the face that remained neutral throughout the experiment. Furthermore, the effect was strongest when faces were presented for short durations (~0.6 s), allowing participants to make no more than two fixations per trial.

Her thesis also contains a detailed account of the methodology for generating face stimuli that are perceptually calibrated to form a two-dimensional similarity gradient with equal perceptual steps between faces. The methodology extends the work of Yue et al. (Vision Research, 2012). If you wish to use these stimuli in your experiment, they are available upon request.

Perceptually calibrated set of faces forming a circular similarity gradient, calibrated according to a simple model of primary visual cortex. Details on their production can be found in Lea's MSc thesis (please contact any of us for a PDF).
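The core idea of such a calibration can be sketched as follows: given some perceptual dissimilarity model (the thesis uses a simple V1-like model; here a toy function stands in for it), integrate the perceptual distance along the morph circle and resample it at equal perceptual intervals. The function name and procedure below are illustrative assumptions, not the thesis' actual code.

```python
import numpy as np

def calibrate_circle(n_faces, perceptual_dissim, n_dense=1000):
    """Pick n_faces angles on a morph circle such that adjacent faces are
    separated by (approximately) equal *perceptual* steps.

    perceptual_dissim(a, b) is any model-based dissimilarity between the
    faces at morph angles a and b (hypothetical stand-in for a V1 model).
    """
    theta = np.linspace(0.0, 2 * np.pi, n_dense + 1)
    # local perceptual step between neighbouring dense samples
    steps = np.array([perceptual_dissim(theta[i], theta[i + 1])
                      for i in range(n_dense)])
    # cumulative perceptual distance travelled along the circle
    cum = np.concatenate([[0.0], np.cumsum(steps)])
    # resample the circle at equal perceptual intervals
    targets = np.linspace(0.0, cum[-1], n_faces, endpoint=False)
    return np.interp(targets, cum, theta)
```

With a uniform dissimilarity this reduces to equally spaced physical angles; with a non-uniform one, the physical spacing is warped so that the perceptual spacing becomes even.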

Categorical, yet graded – single-image activation profiles of human category-selective cortical regions

Mur et al. investigated the selectivity of activity levels evoked by single images in the parahippocampal place area (PPA) and the fusiform face area (FFA). They focus here only on the average BOLD activity within carefully selected ROIs.

The paper is very creative in terms of new analysis methods, relying heavily on rank orders and hypothesis testing with bootstrapping.

First, it establishes that PPA and FFA behave as expected: face stimuli rank highest in evoked activity in the FFA, and place stimuli in the PPA. Overall, PPA responses are more selective than FFA responses, reaching AUC values of 1 in both hemispheres. This results from the fact that faces evoke very high activity levels in the FFA, whereas in the PPA it is the deactivation by faces that contributes to the selectivity.
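The rank-based AUC used here can be understood as the probability that a randomly chosen preferred-category image evokes higher activity than a randomly chosen non-preferred image (the Mann–Whitney interpretation of ROC AUC). A minimal sketch, with a hypothetical helper name (the paper's exact implementation may differ):

```python
import numpy as np

def selectivity_auc(preferred, nonpreferred):
    """AUC of category selectivity: probability that a random preferred-
    category image evokes higher activity than a random non-preferred one.
    Ties count as 0.5. An AUC of 1 means perfect rank separation."""
    p = np.asarray(preferred, float)
    q = np.asarray(nonpreferred, float)
    greater = (p[:, None] > q[None, :]).mean()   # fraction of won pairs
    ties = (p[:, None] == q[None, :]).mean()     # fraction of tied pairs
    return greater + 0.5 * ties
```

An AUC of 1, as reported for the PPA, means every place image outranked every non-place image.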

The rest of the report focuses on characterizing the category selectivity of these areas.

If an area is category selective in an ideal sense, a non-preferred stimulus should never evoke a higher activity level than any preferred stimulus, and if it does, this should happen only by chance due to noise.

The number of inverted pairs measures exactly how often this rule is violated, by counting the number of times a stimulus from outside the category ranks higher than a stimulus from within the category. If these inverted pairs replicate across multiple sessions (as measured by the PRIP metric), this constitutes evidence against ideal category selectivity.

As such, however, PRIP is not a very sensitive metric. For example, a single preferred stimulus failing by chance to evoke any activity at all would be sufficient to generate very many inverted pairs, so the metric fluctuates highly nonlinearly with the activity difference between the inverted pairs. The authors therefore used the first session to identify the preferred/non-preferred pairs with the largest activity difference, reasoning that an inversion with the largest activity difference is the observation least likely to arise by chance. If these pairs replicate across sessions, the activity difference should decrease only marginally and remain positive, providing evidence for stable inversions (as such, however, this measure is also influenced by the noise on both the preferred and the non-preferred stimulus).

These analyses provide supporting evidence that FFA and PPA behave like ideal category-selective areas, with the exception of the left FFA, in line with the fact that the left FFA is the ROI with the smallest AUC values (though only at an ROI size of 128 voxels).
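The logic of counting inverted pairs and checking their replication across sessions can be sketched as follows. Function names and the exact replication criterion are illustrative assumptions; the paper's precise definition of PRIP may differ.

```python
def inverted_pairs(preferred, nonpreferred):
    """Return the set of (preferred_idx, nonpreferred_idx) index pairs in
    which the non-preferred image evoked *higher* activity than the
    preferred image (violations of ideal category selectivity)."""
    return {(i, j)
            for i, p in enumerate(preferred)
            for j, q in enumerate(nonpreferred)
            if q > p}

def prip(sess1_pref, sess1_nonpref, sess2_pref, sess2_nonpref):
    """Proportion of session-1 inverted pairs that are also inverted in
    session 2. Values near 0 are consistent with noise-driven inversions;
    values near 1 suggest stable violations of ideal selectivity."""
    inv1 = inverted_pairs(sess1_pref, sess1_nonpref)
    if not inv1:
        return 0.0
    inv2 = inverted_pairs(sess2_pref, sess2_nonpref)
    return len(inv1 & inv2) / len(inv1)
```

Under ideal selectivity, inversions arise only from noise and so should not replicate, keeping PRIP near its chance level.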