Selim Onat

I am a neuroscientist currently working on how humans make generalizations based on what they have previously learnt. To do so, I use a variety of methodologies, including fMRI (1), autonomic recordings (2), as well as eye-movement recordings (3).

This research emanates from the well-established field of "stimulus generalization", following mainly the "lineage" of Hovland, Hull and Roger Shepard (4), and including the more recent computational work of Joshua Tenenbaum (5). Furthermore, it integrates work on anxiety disorders, as these generalization mechanisms are believed to be impaired in people suffering from anxiety disorders.

In the past, I worked on how the nervous system processes natural scenes, both at the electrophysiological and the sensory-motor level. Since the times of Hubel and Wiesel, visual processing has been overwhelmingly studied with artificial stimuli such as moving edges. However, such stimuli suffer from a problem of ecological validity, as they only rarely occur in real life. We therefore investigated cortical processing during viewing of natural movies. This previous work focused on visual processing using mostly the techniques of voltage-sensitive dye imaging and eye-tracking.

Kinect camera to control stop-motion video flow

In this post I would like to explore embodied ways of interacting with a visual stream. More specifically, this is about controlling the flow of a video stream with one's own body movements.

The system I am using is based on Microsoft's Kinect camera. On the computer side, I use Quartz Composer to connect two streams of information: body-related parameters on one side, and parameters of the video stream on the other.

In the following video, I am showing a simple situation where the movie is controlled by vertical movements of my hand.
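The core of this demo is just a mapping from body position to playback position. As a minimal sketch (the actual patching is done visually in Quartz Composer; the function name and the normalized coordinate convention below are my own assumptions for illustration), the vertical hand coordinate can be clamped and linearly scaled to a frame index of the clip:

```python
# Hypothetical sketch: map a normalized hand height (0.0 = bottom,
# 1.0 = top of the Kinect's field of view) onto a frame index of a
# pre-loaded stop-motion clip. Quartz Composer wires this up as a
# graph of patches; the arithmetic underneath is a clamped linear map.

def hand_height_to_frame(hand_y, n_frames):
    """Clamp hand_y to [0, 1] and scale it to a valid frame index."""
    y = min(max(hand_y, 0.0), 1.0)
    return int(round(y * (n_frames - 1)))

# Simulated stream of hand positions (no Kinect needed for this demo):
positions = [0.0, 0.25, 0.75, 1.0, 1.3, -0.2]
frames = [hand_height_to_frame(y, n_frames=120) for y in positions]
print(frames)  # raising the hand scrubs forward through the clip
```

Values outside the tracked range are clamped, so overshooting with the hand simply pins the video at its first or last frame rather than producing an invalid index.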

Kinect controlled video playing from sonat on Vimeo.