Hi experts,

I am talking about basic pattern classification (e.g. no feature selection
etc). SVM algorithm (with built-in regularization).

1. A small number of data points with a large dimensionality (ROI size) can
cause overfitting, i.e. high accuracy on the training set but poor accuracy
on the test set. Now suppose I obtain above-chance classification on the
test set, validated with a within-subject permutation test and an
across-subjects t-test against chance. Can my results still be unreliable?
If so, how can I test for it?

2. Practically, are 10 independent data points (averaged block values or beta
values) with an ROI of 100 voxels safe enough?

3. Do you know of any imaging papers that have tested or discussed this issue?

Thanks for ideas,
Vadim
_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa