From the Neuroskeptic blog, an interesting post on Kriegeskorte et al. in Nature Neuroscience… once again illustrating the very difficult problems inherent in doing statistics on this type of data.
But it would be wrong to think that this is a problem with fMRI alone, or even with neuroimaging alone. Any neuroscience experiment in which a large amount of data is collected and only some of it makes it into the final analysis is equally at risk. For example, many neuroscientists use electrodes to record the electrical activity of the brain. It’s increasingly common to use not just one electrode but a whole array of them, recording activity from more than one brain cell at once. This is a very powerful technique, but it raises the risk of the non-independence error, because there is a temptation to analyze only the data from those electrodes showing the “right” signal, as the authors point out:
In single-cell recording, for example, it is common to select neurons according to some criterion (for example, visual responsiveness or selectivity) before applying further analyses to the selected subset. If the selection is based on the same dataset as is used for selective analysis, biases will arise for any statistic not inherently independent of the selection criterion.
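The quoted point is easy to demonstrate with a quick simulation. Here is a minimal sketch in Python (the electrode and trial counts, and the top-10 “responsiveness” selection rule, are arbitrary choices for illustration, not anything from the paper). The “recordings” are pure noise, so the true mean response everywhere is zero. Yet if we pick the most responsive electrodes and compute their mean response on the same data used to pick them, we get a clearly positive number. Computing the same statistic on an independent set of trials gives an estimate near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_trials = 100, 50

# Pure noise: no electrode carries a real signal (true mean response = 0).
data = rng.normal(0.0, 1.0, size=(n_electrodes, n_trials))
# Independent replication: fresh trials from the same (null) process.
replication = rng.normal(0.0, 1.0, size=(n_electrodes, n_trials))

# Select "responsive" electrodes: the 10 with the largest mean response.
mean_resp = data.mean(axis=1)
selected = np.argsort(mean_resp)[-10:]

# Circular (non-independent) analysis: statistic computed on the SAME data
# used for selection -- biased upward even though there is no real signal.
biased = data[selected].mean()

# Independent analysis: same electrodes, statistic computed on fresh trials.
unbiased = replication[selected].mean()

print(f"circular estimate:    {biased:+.3f}")    # clearly above zero
print(f"independent estimate: {unbiased:+.3f}")  # near zero
```

The fix the simulation suggests is the one the paper recommends: make the selection step statistically independent of the test step, for instance by selecting electrodes on one half of the trials and computing the statistic of interest on the other half.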