I'd look at the distribution of the permutation-test accuracies. Is it
nice and normal, centered at 0.5 (assuming two classes)? If so, then even
if the tails are long (so that 0.6 falls in the top 5%), it's probably
fine. I've occasionally seen wide permutation distributions, particularly
when the number of samples is small (and your sample count is certainly
smaller when you run the classification on block-averaged samples).
Jo
On 5/14/2011 7:35 AM, Vadim Axel wrote:
Hi guys,
I wanted to ask your opinion about some weird result that I get.
To establish significance I randomly permute my labels, and I get
prediction rates of 0.6 and even above (p-value = 0.05). In other words,
5% of the permuted samples yield a prediction rate of 0.6 or higher. The
training/test samples are independent and the ROI size is small (no
overfitting). Interestingly, I get this result when I average trials
within blocks (one data point per block; ~25 blocks in total). When I run
the classification on raw trials, my permutation threshold drops to
~0.55. In both cases the prediction for non-permuted labels is around the
significance threshold.
How should I treat such a result? What might have gone wrong?
Thanks a lot for your help,
Vadim
_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa