Selim Onat

I am a neuroscientist currently working on how humans make generalizations based on what they have previously learnt. To do so, I use a variety of methodologies including fMRI (1), autonomic recordings (2), and eye-movement recordings (3).

This research emanates from the well-established field of "stimulus generalization", following mainly the lineage of Hovland, Hull, and Roger Shepard (4), and including the more recent computational work of Joshua Tenenbaum (5). Furthermore, it integrates work on anxiety disorders, as these generalization mechanisms are believed to be impaired in people suffering from pathological anxiety.

In the past, I worked on how the nervous system processes natural scenes at both the electrophysiological and sensory-motor levels. Since the times of Hubel and Wiesel, visual processing had been overwhelmingly studied with artificial stimuli such as moving edges. However, this type of stimulus suffers from an ecological validity problem, as such stimuli only rarely occur in real life. We therefore investigated cortical processing during the viewing of natural movies. This previous work focused on visual processing, using mostly voltage-sensitive dye imaging and eye-tracking.

On the number of reviewers

While scientists can and should use sophisticated methods to investigate what they are interested in, the apparent simplicity of the major "quality check" mechanism in scientific publishing stands in blatant contrast. Recently, one of our submitted articles was rejected by the editorial board after receiving comments from two reviewers. While this is business as usual, it made me think again about the subjectivity of the situation.

Imagine you want to buy a product: one option has 4.2 stars from 100 people, and another has 5 stars from only 2 people. Knowing that scores are bounded between 1 and 5, which one would you go for? If you believe that other people's ratings are informative, as a rational consumer you would go for the first product. The reason is that the second one has not yet received many ratings, so the evidence that its true score is as high as 5 is rather weak. For the first product, by contrast, there are 100 opinions, suggesting that the average rating we observe must be close to its true value.
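This intuition is easy to check with a quick simulation. The sketch below is a toy model, not a formal derivation: it assumes that each rater's score is the product's true score plus Gaussian noise (standard deviation of 1 is an arbitrary choice), clipped to the 1-5 scale, and it compares how widely the observed average scatters with 2 versus 100 raters.

```python
import random

def observed_means(true_score, n_raters, n_trials=10000, sd=1.0):
    """Simulate the average rating a product with a given true score
    receives from n_raters, with each rating clipped to the 1-5 scale."""
    means = []
    for _ in range(n_trials):
        ratings = [min(5.0, max(1.0, random.gauss(true_score, sd)))
                   for _ in range(n_raters)]
        means.append(sum(ratings) / n_raters)
    return means

def spread(xs):
    """Standard deviation of a list of simulated averages."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
few = observed_means(true_score=4.2, n_raters=2)
many = observed_means(true_score=4.2, n_raters=100)

# The observed average scatters far more around the true score
# when only 2 people have rated the product.
print("spread with 2 raters:  ", round(spread(few), 2))
print("spread with 100 raters:", round(spread(many), 2))
```

Under these assumptions the spread shrinks roughly with the square root of the number of raters, which is why the 100-rating average is the more trustworthy signal.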

What if there are two similar products, reviewed by 2 and 3 people, respectively? Which one would you go for? This is the typical situation when your manuscript is reviewed. And the main question I am asking here is the following:

What is the probability of receiving a score that is close enough to the true score of your product when the number of reviewers is as small as 2?

We can compute this number exactly with a few assumptions, and I will provide the maths for that later. I believe that the distribution of plausible true scores between 1 and 5 remains rather flat when the number of reviewers is as low as 2. Based on this method, I will also derive the optimal number of reviewers for a scientific work, i.e. the number that raises the chances that reviews detect the true quality of the "product".
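Ahead of the exact maths, the question can at least be estimated numerically. The sketch below makes the same toy assumptions as before (Gaussian reviewer noise with standard deviation 1, scores clipped to 1-5) and defines "close enough" as the panel's average landing within ±0.5 of the true score; both choices are mine, not established facts about peer review.

```python
import random

def prob_close(true_score, n_reviewers, tol=0.5, n_trials=20000, sd=1.0):
    """Estimate the probability that the average of n reviewer scores
    lands within tol of the true score (scores clipped to 1-5)."""
    hits = 0
    for _ in range(n_trials):
        scores = [min(5.0, max(1.0, random.gauss(true_score, sd)))
                  for _ in range(n_reviewers)]
        if abs(sum(scores) / n_reviewers - true_score) <= tol:
            hits += 1
    return hits / n_trials

random.seed(1)
# How often does a panel of n reviewers land within +/-0.5
# of a manuscript whose true score is 3.0?
for n in (2, 3, 5, 10):
    print(n, "reviewers:", round(prob_close(3.0, n), 2))
```

With only 2 reviewers, even this generous tolerance is met barely more than half the time under these assumptions; the estimate climbs steadily as reviewers are added, which is the trade-off any "optimal number of reviewers" calculation has to balance against reviewing cost.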