Hi fellow PyMVPA users,

I have a non-software-related question for you all. Imagine a scenario where
there are N runs in a scanning session and a searchlight is used to compute
transfer error from an N-fold cross-validation over all voxels. If there are
only two categories you are trying to classify, how would you interpret large,
spatially contiguous clusters of voxels in the results that perform
significantly below chance (around 20% correct)? Could something in the data
be changing in relation to the number of times the subject has seen examples
of a category? If the misclassification were caused by an across-run,
repetition-suppression-type effect, would re-running the searchlights with an
odd-even split and checking whether these voxels return to chance be a
legitimate way to show there is meaning in the misclassification of my SVMs?
A sketch of the comparison I have in mind follows below.
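To make the proposed control concrete, here is a minimal sketch of the two
analyses using the PyMVPA 2.x suite interface. The file names, my_targets,
my_chunks, and the searchlight radius are placeholders for my actual data,
not recommendations:

from mvpa2.suite import (fmri_dataset, zscore, mean_sample, LinearCSVMC,
                         CrossValidation, NFoldPartitioner,
                         OddEvenPartitioner, sphere_searchlight)

# Placeholders: substitute your own preprocessed data. 'targets' holds
# the two category labels, 'chunks' the run number of each volume.
ds = fmri_dataset('bold.nii.gz', mask='mask.nii.gz',
                  targets=my_targets, chunks=my_chunks)
zscore(ds, chunks_attr='chunks')  # z-score per run

clf = LinearCSVMC()

# Original analysis: leave-one-run-out (N-fold) cross-validation,
# averaged across folds, run as a searchlight over all voxels.
cv_nfold = CrossValidation(clf, NFoldPartitioner(),
                           postproc=mean_sample())
sl_nfold = sphere_searchlight(cv_nfold, radius=3)
err_nfold = sl_nfold(ds)  # per-voxel mean transfer error

# Control analysis: odd/even-run split. If the below-chance clusters
# were driven by a run-order (repetition-suppression-like) effect,
# training on odd runs and testing on even ones (and vice versa)
# should pull those voxels back toward chance-level error (~0.5).
cv_oe = CrossValidation(clf, OddEvenPartitioner(),
                        postproc=mean_sample())
sl_oe = sphere_searchlight(cv_oe, radius=3)
err_oe = sl_oe(ds)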

I haven't been able to find any neuroimaging papers that address or report
below-chance performance; does anyone know of one? Or would it be better to
search the machine-learning literature?

I hope to hear your thoughts on this.

Thanks,
Matt