Selim Onat

I am a neuroscientist currently working on how humans make generalizations based on what they have previously learnt. To do so, I use a variety of methodologies including fMRI (1), autonomic recordings such as skin conductance (2), as well as eye-movement recordings (3).

This research builds on the well-established field of "stimulus generalization", following mainly the "lineage" of Hovland, Hull and Roger Shepard (4), and including the more recent computational work of Joshua Tenenbaum (5). Furthermore, it integrates work on anxiety disorders, as these generalization mechanisms are believed to be impaired in people suffering from anxiety problems.

In the past, I worked on how the nervous system processes natural scenes at both the electrophysiological and the sensory-motor level. Since the times of Hubel and Wiesel, visual processing had been overwhelmingly studied with artificial stimuli such as moving edges. However, these stimuli suffer from an ecological validity problem, as they only rarely occur in real life. We therefore investigated cortical processing during the viewing of natural movies. This previous work focused on visual processing using mostly voltage-sensitive dye imaging and eye-tracking.

Conventions on Folder Structures for Storing Experimental Data vs. Object-Oriented Programming

What is the best folder structure for storing data recorded during an experiment? Is this even an important question? It seems to be one that stirs up debates, and some people have proposed conventions for folder structures. On the fMRI side of the experimental spectrum (without loss of generality to other domains), we typically record enormous amounts of data per participant, covering BOLD acquisition, raw skin-conductance recordings, eye movements, and what not.

However, I actually think this is not a necessary question. Why? Because using object-oriented programming (OOP), one can simply design a data object (for example, an object representing a single subject) that knows where the data is located.

In OOP, we start by defining the properties that our object will need to have. This view therefore forces us to plan beforehand what we would like to achieve with a given object we are programming. For example, if our aim is to define an object representing an individual subject recorded in an experiment (a Subject object 😀), this object might have the following properties defined:

classdef Subject < Project
    properties
        id                           %subject ID
        path                         %path to the participant's data folder
    end
end


And, my Subject object for the recorded participant number 5 could be constructed as follows:

>> s = Subject(5)
Subject 05 (/Users/onat/Documents/project_FPSA_FearGen/data/sub005/)
s = 
  Subject with properties:

                id: 5
              path: '/data/sub005/'
         path_fmri: '/data/sub005/run000/fmri'
          path_eye: '/data/sub005/run000/eye'
          path_scr: '/data/sub005/run000/scr'

path_fmri, path_eye and path_scr are all properties of the Subject object, and their values are filled in automatically during the construction of the object, when I call Subject with the argument 5.

These values are filled in by the methods included in the definition of the Subject object. For example, a one-line getter method called get.path_fmri fills in the path to the fMRI data for this subject. This could easily be changed or adapted to fit another data set (given that it is consistent across participants).
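As a sketch, such a getter can be implemented with MATLAB's Dependent properties, which are computed on access rather than stored. The folder names below are taken from the paths shown above; the data_root property is a hypothetical placeholder for whatever the Project superclass provides, not the actual code of this project:

classdef Subject < Project
    properties
        id                           %subject ID
        path                         %path to the participant's data folder
    end
    properties (Dependent)
        path_fmri                    %computed on access, never stored on disk
    end
    methods
        function self = Subject(id)
            %construct the subject and derive its root path from the ID
            self.id   = id;
            self.path = sprintf('%ssub%03d/', self.data_root, id);
        end
        function out = get.path_fmri(self)
            %one-liner: derive the fMRI path from the subject's root path
            out = fullfile(self.path, 'run000', 'fmri');
        end
    end
end

Adapting the object to a differently organized data set then means editing only this one getter, not every analysis script that consumes the paths.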

Therefore, in my view, the key is not to settle on a folder-structure convention that everybody will agree on, but rather to represent scientific data with intelligent objects that already know where things are stored. This is one of the enormous benefits OOP brings to the organization and analysis of scientific data.
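To illustrate the payoff, an analysis loop can then be written entirely without hard-coded paths. The file name data.mat here is hypothetical, standing in for whatever a given project actually records:

%iterate over participants without ever spelling out a folder structure
for id = 1:24
    s   = Subject(id);                          %object resolves all paths itself
    scr = load(fullfile(s.path_scr, 'data.mat'));  %skin-conductance data
    %... run the analysis on scr ...
end

If the data ever moves or is reorganized, only the Subject class changes; every script built on top of it keeps working untouched.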