Selim Onat

I am a neuroscientist currently working on how humans make generalizations based on what they have previously learnt. To do so, I use a variety of methodologies, including fMRI (1), autonomic recordings (2), and eye-movement recordings (3).

This research emanates from the well-established field of "stimulus generalization", following mainly the "lineage" of Hovland, Hull and Roger Shepard (4), and including the more recent computational work of Joshua Tenenbaum (5). Furthermore, it integrates work on anxiety disorders, as these generalization mechanisms are believed to be impaired in people suffering from anxiety.

In the past, I worked on how the nervous system processes natural scenes, both at the electrophysiological and the sensory-motor level. Since the times of Hubel and Wiesel, visual processing has been studied overwhelmingly with artificial stimuli such as moving edges. However, this type of stimulus suffers from an ecological validity problem, as it only rarely occurs in real life. We therefore investigated cortical processing during the viewing of natural movies. This previous work focused on visual processing, using mostly voltage-sensitive dye imaging and eye-tracking.

Representation of Natural Movies across the Visual Cortex

Below is a video showing the spatio-temporal activity patterns in response to artificial and natural stimuli. These beautiful recordings were made by Dirk Jancke in his laboratory. We compared the activity patterns evoked by natural movies to those evoked by artificial stimuli (such as moving edges) that are typically used in physiological experiments.

We were the first research group to record large-scale cortical activity patterns in response to natural movies using voltage-sensitive dye imaging.



Voltage-sensitive dye imaging during natural and artificial conditions. The first column depicts the stimuli as shown during the experiment: drifting square gratings (rows 1 and 2) and natural movies recorded by cats (rows 3 and 4). Colored rectangles indicate the positions of receptive fields hand-mapped at each penetration site, symbolized by a color-matching circle in the second column. The evoked optical imaging signals are depicted in the second column. The scale bar represents 1 mm across the cortex. Note that the color code has different scales across conditions. The third column depicts the time course of spatially averaged activity. The strength of the motion flow field is represented in the last column.




Voltage-sensitive dye recordings of cortical responses to natural stimuli and gratings. (A) Two natural movies (blue and orange boxes) and gratings (gray box) used as stimulation are depicted together with evoked cortical responses. Visual stimuli are shown in the upper rows within each box (movie 1 and movie 2). The leftmost image represents an example movie frame covering approximately a visual angle of 30° × 40°. The scale bar represents 5° of visual angle. The white rectangle approximates the local portion that directly stimulated the recorded cortical area. The temporal evolution of the movie within the delineated region is shown in succeeding frames. The second row within each box displays activity during intervals of nonoverlapping 100-ms frames, including the prestimulus period. The rightmost image shows the average activity computed over the entire stimulus presentation of 2 s. See the top left frame for a vascular image of the recorded cortical area (P = posterior, L = lateral; scale bar represents 1 mm). The color bar indicates activity levels as fractional fluorescence change relative to blank. (B) Time courses of global activity computed as the average across all pixels of a given frame. The shaded gray area symbolizes the prestimulus period. Line colors are matched to the boxes shown in A; black = grating, blue/red = natural conditions. The thickness of the lines represents CIs computed by resampling all the pixels that belong to a given frame (P = 10^5). Right panel: mean amplitudes of activity; error bars represent the SD.
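The global activity traces and their confidence intervals in panel B follow a simple recipe: average the fractional fluorescence change across all pixels of each frame, then bootstrap that average by resampling pixels with replacement. The sketch below is only an illustration of this idea, assuming NumPy and a hypothetical array "frames" of shape (n_frames, n_pixels); the variable names and the reduced resample count are placeholders (the figure uses 10^5 resamples).

import numpy as np

rng = np.random.default_rng(0)

def global_activity(frames):
    """Spatially averaged activity: mean across all pixels of each frame."""
    return frames.mean(axis=1)

def bootstrap_ci(frames, n_resamples=1000, alpha=0.05):
    """Per-frame confidence interval of the spatial mean, obtained by
    resampling the pixels of each frame with replacement."""
    n_frames, n_pixels = frames.shape
    lo = np.empty(n_frames)
    hi = np.empty(n_frames)
    for t in range(n_frames):
        means = np.empty(n_resamples)
        for b in range(n_resamples):
            sample = rng.choice(frames[t], size=n_pixels, replace=True)
            means[b] = sample.mean()
        lo[t], hi[t] = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Synthetic stand-in for one stimulus condition: 20 frames of 100 ms,
# 5000 pixels, values expressed as dF/F relative to blank.
frames = rng.normal(loc=0.001, scale=0.0005, size=(20, 5000))
mean_trace = global_activity(frames)
ci_low, ci_high = bootstrap_ci(frames)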

A few more videos: