Hi guys,

I wanted to ask your opinion about a strange result I am getting.
To establish significance I randomly permute my labels, and the permutation
threshold comes out at a prediction rate of 0.6 (p-value = 0.05); in other
words, 5% of the permutations yield a prediction rate of 0.6 or higher. The
training and test samples are independent and the ROI is small, so
overfitting should not be an issue.
Interestingly, I get this result when I average trials within each block
(one data point per block, ~25 blocks in total). When I run the
classification on the raw trials, the permutation threshold drops to ~0.55.
In both cases the prediction rate for the non-permuted labels is right
around the corresponding significance threshold.
How should I treat such a result? What might have gone wrong?
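
To make the setup concrete, here is a rough, self-contained sketch of the
kind of test I mean (not my actual pipeline; just plain NumPy/scikit-learn
with an assumed linear SVM, 50 noise features and 5-fold cross-validation).
It repeatedly permutes the labels, collects the resulting accuracies and
takes their 95th percentile as the threshold, once for ~25 block-averaged
samples and once for a few hundred raw trials:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def perm_threshold(n_samples, n_features=50, n_perms=500, alpha=0.05):
    """Accuracy exceeded by only ~(alpha * 100)% of label permutations."""
    X = rng.standard_normal((n_samples, n_features))  # pure-noise "ROI" data
    y = np.tile([0, 1], n_samples // 2)               # balanced binary labels
    clf = LinearSVC(max_iter=10_000)                  # placeholder classifier
    null_acc = [
        cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
        for _ in range(n_perms)
    ]
    return np.quantile(null_acc, 1.0 - alpha)

# one data point per block (~25 blocks) vs. trial-level data
print("threshold, 24 block averages:", perm_threshold(24))
print("threshold, 200 raw trials:   ", perm_threshold(200))

(I used cross-validation there only to keep the sketch short; in my real
analysis the training and test samples are independent splits, as mentioned
above.)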

Thanks a lot for your help,
Vadim