Great Night at the Night of Science in Hamburg 2017

During the Night of Science event in Hamburg, we (Lea Kampermann, Lukas Neugebauer, and I) introduced the eye-tracking technique to our guests and illustrated it with the classical change blindness experiment.

We explained the basics of eye-tracking and illustrated it with a classical experiment in visual neuroscience, namely the phenomenon of change blindness. We received about 60 people and recorded eye movements from 10 volunteers. Below I prepared an animated GIF that shows both the images presented to volunteers and the locations they fixated most. Our efforts were rewarded: ours was the highest-rated demo in our department that night.

Nine images shown to ten volunteers during the Night of Science event in Hamburg. The animation above consists of three different image types: the first two are the flickering images presented during the experiment to induce change blindness, and the last one is a semi-transparent fixation map showing the locations most attended by all volunteers.

For more information on change blindness, there is probably no better source than the webpage of Kevan O'Regan, whose name is closely associated with this phenomenon. For the demo, we actually used many of the images from the original publication. His webpage also provides a rich and original source of information on vision and perception.
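The flicker paradigm behind this demo is simple to describe in code: an original image and a slightly modified copy alternate, each separated by a brief blank that masks the motion transient which would otherwise give the change away. A minimal sketch of the frame schedule (the timings and names here are illustrative assumptions, not the exact ones we used):

```python
def flicker_schedule(n_cycles, image_ms=240, blank_ms=80):
    """Build the frame sequence for one change-blindness trial.

    Returns a list of (frame_name, duration_ms) tuples alternating the
    original image 'A', a blank mask, the modified image 'A_mod', and
    another blank. The blank between images masks the motion transient,
    which is why the change becomes surprisingly hard to spot.
    """
    cycle = [("A", image_ms), ("blank", blank_ms),
             ("A_mod", image_ms), ("blank", blank_ms)]
    return cycle * n_cycles

# One second of flicker, roughly: two full cycles
schedule = flicker_schedule(n_cycles=2)
```

In a real experiment each tuple would drive the presentation software's frame loop; the point here is only the alternation structure.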

I just submitted the following short paper to the 1st Cognitive Computational Neuroscience meeting. I am looking forward to participating!

Model-Based Fixation-Pattern Similarity Analysis Reveals Adaptive Changes in Face-Viewing Strategies Following Aversive Learning

Lea Kampermann
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf

Niklas Wilming
Department of Neurophysiology and Pathophysiology
University Medical Center Hamburg-Eppendorf

Arjen Alink
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf

Christian Büchel
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf

Selim Onat
Department of Systems Neuroscience
University Medical Center Hamburg-Eppendorf


Learning to associate an event with an aversive outcome typically leads to generalization when similar situations are encountered. In real-world situations, generalization must be based on the sensory evidence collected through active exploration. However, our knowledge of how exploration can be adaptively tailored during generalization is scarce. Here, we investigated learning-induced changes in eye-movement patterns using a similarity-based multivariate fixation-pattern analysis. Humans learnt to associate an aversive outcome (a mild electric shock) with one face along a circular perceptual continuum, whereas the most dissimilar face on this continuum was kept neutral. Before learning, eye-movement patterns mirrored the similarity characteristics of the stimulus continuum, indicating that exploration was mainly guided by subtle physical differences between the faces. Aversive learning increased the dissimilarity of exploration patterns. In particular, this increase occurred specifically along the axis separating the shock-predicting face from the neutral one. We suggest that this separation of patterns results from an internal categorization process for the newly learnt harmful and safe facial prototypes.
Keywords: Eye movements; Generalization; Categorization; Face Perception; Aversive Learning; Multivariate Pattern Analysis; Pattern Similarity

To avoid costly situations, animals must be able to rapidly predict future adversity based on actively harvested information from the environment. In humans, a central part of active exploration involves eye movements, which can rapidly determine what information is available in a scene. However, we currently do not know the extent to which eye movement strategies are flexible and can be adaptive following aversive learning.

We investigated how aversive learning influences exploration strategies during viewing of faces that were designed to form a circular perceptual continuum (Fig. 1A). One randomly chosen face along this continuum (CS+; Fig. 1, red; see color wheel) was paired with a mild electric shock, which introduced an adversity gradient based on physical similarity to the CS+ face. The most dissimilar face (CS–; Fig. 1, cyan), separated by 180° on the circular continuum, was not reinforced and thus stayed neutral. Using this paradigm, we were able to investigate how exploration strategies were modified both by the physical similarity relationships between faces and by the adversity gradient introduced through aversive learning.

Figure 1: (A) Eight exploration patterns (FDMs, colored frames) from a representative individual overlaid on the 8 face stimuli (numbered 1 to 8) calibrated to span a circular similarity continuum across two dimensions. A pair of maximally dissimilar faces was randomly selected as CS+ (red border) and CS– (cyan border; see color wheel for color code). The similarity relationships among the 8 faces and the resulting exploration patterns are depicted as two 8×8 matrices. (B-E) Multidimensional-scaling representations (top row) and the corresponding dissimilarity matrices (bottom row) depicting four possible scenarios for how learning could change the similarity geometry of the exploration maps (same color scheme; red: CS+; cyan: CS–). These matrices are decomposed into covariate components (middle row) centered either on the CS+/CS– faces (specific component) or on the +90°/–90° faces (unspecific component). A third component is centered uniquely on the CS+ face (adversity component). These components were fitted to the observed dissimilarity matrices, and a model selection procedure was carried out.

We used a variant of representational similarity analysis (Kriegeskorte, Mur, & Bandettini, 2008) that we term “fixation-pattern similarity analysis” (FPSA). FPSA treats exploration patterns as multivariate entities and assesses the between-condition dissimilarity of fixation patterns for individual participants (Fig. 1A). We formulated four hypotheses (bottom-up saliency, increased arousal, adversity categorization, adversity tuning) about how aversive learning might alter the similarity relationships between exploration patterns when one face on the continuum started to predict adversity (Fig. 1B-E).
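In spirit, FPSA boils down to computing pairwise dissimilarities between vectorized fixation density maps, one per face condition. A minimal NumPy sketch (the map size, the toy data, and the choice of correlation distance are illustrative assumptions; the exact preprocessing in the paper may differ):

```python
import numpy as np

def fpsa_dissimilarity(fdms):
    """Compute an n_conditions x n_conditions dissimilarity matrix.

    fdms: array of shape (n_conditions, height, width), one fixation
    density map (FDM) per face condition for one participant.
    Dissimilarity is 1 - Pearson correlation between vectorized maps,
    so identical maps get 0 and anti-correlated maps get 2.
    """
    n = fdms.shape[0]
    flat = fdms.reshape(n, -1)       # vectorize each map
    corr = np.corrcoef(flat)         # n x n correlation matrix
    return 1.0 - corr                # correlation distance

# Toy example: 8 random "maps", one per face on the continuum
rng = np.random.default_rng(0)
maps = rng.random((8, 50, 50))
rdm = fpsa_dissimilarity(maps)       # 8x8 symmetric, zero diagonal
```

The resulting matrix is the per-participant analogue of the 8×8 matrices shown in Fig. 1A, and is what the hypothesis components are later fitted to.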

Before learning, eye-movement patterns mirrored the similarity characteristics of the stimulus continuum, indicating that exploration was mainly guided by subtle physical differences between the faces. Aversive learning resulted in a global increase in the dissimilarity of eye-movement patterns. Model-based analysis of the similarity geometry indicated that this increase was specifically driven by a separation of patterns along the adversity gradient, in agreement with the adversity categorization model (Fig. 1D). These findings show that aversive learning can substantially and adaptively remodel exploration patterns during viewing of faces. In particular, we suggest that the separation of patterns for harmful and safe prototypes results from an internal categorization process operating along the perceptual continuum following learning.
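The model-based step can be sketched as fitting covariate components to the observed dissimilarity matrix by least squares. Here is a toy version with two components, one centered on the CS+/CS– axis (specific) and one on the orthogonal ±90° faces (unspecific); the Gaussian parameterization of the components and the kernel width are my illustrative assumptions, not the paper's exact model:

```python
import numpy as np

angles = np.arange(8) * 45.0  # face positions on the circular continuum (deg)

def angular_diff(a, b):
    """Smallest absolute angle between two positions, in degrees."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def component(center_deg, width=60.0):
    """8x8 dissimilarity component: entries grow when the pair is
    separated along the axis through the given center face.
    Built from a Gaussian weight over angular distance (an assumption)."""
    w = np.exp(-0.5 * (angular_diff(angles, center_deg) / width) ** 2)
    return np.abs(w[:, None] - w[None, :])

# Specific component: centered on CS+ (0 deg) and CS- (180 deg);
# unspecific component: centered on the orthogonal +/-90 deg faces.
specific = component(0.0) + component(180.0)
unspecific = component(90.0) + component(270.0)

def fit_components(observed_rdm):
    """Least-squares weights (intercept, specific, unspecific) fitted
    to the off-diagonal entries of an observed 8x8 dissimilarity matrix."""
    mask = ~np.eye(8, dtype=bool)
    X = np.column_stack([np.ones(mask.sum()),
                         specific[mask], unspecific[mask]])
    beta, *_ = np.linalg.lstsq(X, observed_rdm[mask], rcond=None)
    return beta
```

Comparing the fitted weights (e.g., specific vs. unspecific) across participants and learning phases is what supports the model selection described above.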


Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience. Frontiers in Systems Neuroscience, 2.


Talk given at the EMHFC Conference

I gave this talk at the European Meeting on Human Fear Conditioning about "Temporal Dynamics of Aversive Generalization".

Temporal Dynamics of Aversive Learning and Generalization in Amygdala

The amygdala is thought to orchestrate coordinated bodily responses important for the survival of the organism during threatening situations. However, its contribution to the generalization of previously learnt aversive associations is not well understood. As amygdala responses in the context of fear conditioning are temporally phasic and adapt quickly, we designed a new paradigm to investigate its temporal dynamics during fear generalization. We used faces that formed a circular perceptual similarity continuum, allowing us to gather two-sided generalization gradients. While one face predicted an aversive outcome (UCS), the most dissimilar face was kept neutral. Importantly, participants were compelled to learn these associations throughout the fMRI recording, which they started naive, allowing us to collect temporally resolved generalization gradients for BOLD and skin-conductance responses. Following fMRI, we evaluated the subjective likelihood of single faces being associated with the UCS, and complemented these ratings with eye-movement recordings to assess how the saliency associated with the faces was modified. Aversive generalization in the amygdala emerged late during the task, and its temporal dynamics were characterized by low learning rates. We observed significant differences in amygdala responses for participants who exhibited a behavioral effect in addition to verbal ratings of UCS likelihood. Amygdalar responses contrasted with the temporal dynamics in the insula, where generalization gradients emerged earlier and gradually increased with higher learning rates, similar to skin-conductance responses. Overall, our results imply a weak and late contribution of the amygdala to aversive generalization, compared with insular responses that are stronger and contribute early during learning.

Adaptive Changes in the Viewing Behavior of Faces Following Aversive Learning

I decided to write a few paragraphs about the papers I will be publishing from now on. These summaries are targeted at a non-technical audience and will, I hope, make the published results more accessible.

Here is our latest work that shows how eye-movement patterns during viewing of faces are modified when people learn to associate faces with an aversive outcome.

Eye movements can be effortlessly recorded while humans are engaged in different situations. This can provide important insights into what the nervous system is trying to achieve, as eye movements represent the final behavioral outcome of many complex neuronal processes that are difficult to record and understand.

We measured eye movements while humans were viewing faces, and analyzed the resulting exploration patterns. These faces were calibrated to have a known similarity relationship. For example, faces A, B and C were physically organised in such a way that B was perceived as equally similar to A and C, whereas A and C formed the most dissimilar pair. First, using novel similarity-based analyses, we show that exploration patterns are dominated by the physical aspects of faces. That is, the physical similarity relationships between A, B and C could be estimated to a good degree from the similarity of the eye-movement patterns generated during viewing of these faces.
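The idea that a circular stimulus arrangement can be read back off pattern dissimilarities is easy to illustrate with classical multidimensional scaling: feed in a dissimilarity matrix with circular structure and the 2-D embedding recovers the ring (up to rotation and reflection). A toy NumPy sketch, using simulated chord distances rather than real eye-movement data:

```python
import numpy as np

def classical_mds(D, n_dims=2):
    """Classical (Torgerson) MDS: embed a dissimilarity matrix D into
    n_dims dimensions via double-centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigh returns ascending order
    order = np.argsort(vals)[::-1]        # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    return vecs[:, :n_dims] * np.sqrt(np.maximum(vals[:n_dims], 0))

# Simulate dissimilarities for 8 faces on a circle (chord distances)
angles = np.arange(8) * 2 * np.pi / 8
pts = np.column_stack([np.cos(angles), np.sin(angles)])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

emb = classical_mds(D)   # points come back arranged on a unit circle
```

With real data the dissimilarities are noisy, so the recovered ring is only approximate, but before learning its circular geometry was clearly visible.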

Later in the experiment, we made one face a nasty one by associating its presentation with a mild electric current on the hand of our volunteers, generating an unpleasant feeling without hurting them. Participants learnt to associate this unpleasant outcome with only one face, while the other faces were kept the same as before. This introduced a gradient of unpleasantness that was not present before and led volunteers to generalize the unpleasant association to other faces to the extent that these were perceived as similar to the nasty face. This is the classic phenomenon of generalization, known since the early days of Pavlov.

How does this new situation modify the similarity relationships between exploration patterns? Following learning, the similarity relationships of eye-movement patterns started to mirror the newly learnt categories of nasty vs. safe faces, even though no physical changes were made to the faces. This is compatible with the idea that, following learning along an arbitrary continuum of stimuli, an internal categorization process distinguishes safe from nasty faces. This process then biases eye-movement patterns during viewing of faces in such a way as to collect information specifically associated with the safe and nasty prototypes, leading faces resembling these prototypes to be scanned similarly.

This study provides a nice illustration of how eye-movement patterns can shed light on neuronal processes and help us understand what the brain is trying to achieve during learning.

Eye-movement patterns on 8 different but similar faces that were carefully calibrated to form a similarity continuum. These maps show the most attended locations for a single participant before learning. Similarity analysis of these heatmaps using the FPSA method can detect learning-induced changes in scanning behavior.


Aversive Learning Changes Face-Viewing Strategies, as Revealed by Model-Based Fixation-Pattern Similarity Analysis. Lea Kampermann, Niklas Wilming, Arjen Alink, Christian Buechel, Selim Onat.

All content in this post released under CC-BY 4.0.