hi all,
with today's efforts I got down to 13 errors and 15 failures out
of 284 tests, so it's getting better :)
on the same branch the same set of unittests passes without errors
or failures with python 2.7, so I assume I did not break anything
important.
I attach the error logs here.
Dear PyMVPA experts,
Isn't leave-one-out cross-validation supposed to produce a smaller bias
yet a larger variance in comparison to N-fold cross-validation when N <
the # of samples?
I ran a sanity check on binary classification of 200 random samples. 4-fold
cross-validation produced unbiased (chance-level) accuracy, but leave-one-out
came out well below chance.
if we were to talk about bias we would talk about classification of true
effects ;)
you are trying to learn/classify noise on disbalanced sets -- since you
have 'events' == range(200) and each sample/event is taken out
separately, every training set has 100 of one target (say 1) and 99 of the
other (say 0). Since the left-out sample always belongs to the class that
is in the minority during training, a classifier leaning toward the
majority class misclassifies it, pushing leave-one-out accuracy below
chance.
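
[Archive note: a minimal numpy sketch, not from the original thread,
that reproduces the effect with a plain majority-vote classifier on 200
balanced random labels; no actual features are needed to see it.]

    import numpy as np

    targets = np.repeat([0, 1], 100)        # 200 samples, perfectly balanced

    hits = 0
    for i in range(len(targets)):
        train = np.delete(targets, i)       # 199 training targets remain
        majority = int(train.mean() > 0.5)  # majority-vote "classifier"
        hits += int(majority == targets[i])

    # the held-out sample is always in the training minority, so the
    # majority vote is always wrong: accuracy is 0.0, far below chance
    print("LOO accuracy:", hits / float(len(targets)))
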
Thanks Yaroslav! The previous results make sense now.
I have a related question: after feature selection on totally random
samples, my binary classification accuracy was significantly better than
chance (50%). For MVPA with feature selection on real fMRI data, how do we
know better-than-chance accuracy reflects real signal rather than such an
artifact?
Ping-Hui Chiu chiuping...@gmail.com wrote:
ds = fsel(ds)              # feature selection trained on the FULL dataset
ds_chunks = cv_chunks(ds)  # cross-validation then sees already-selected features
Because you double dip here: feature selection should be trained only on
the training portion of the data.
Use FeatureSelectionClassifier.
I would also recommend going through our tutorial, which highlights such cases
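
[Archive note: a minimal sketch of the suggested setup, adapted from the
PyMVPA tutorial; the ANOVA measure, the 500-feature cutoff, and the
linear SVM are illustrative choices, and 'ds' is assumed to be a dataset
carrying targets and chunks attributes.]

    from mvpa2.suite import (CrossValidation, FeatureSelectionClassifier,
                             FixedNElementTailSelector, LinearCSVMC,
                             NFoldPartitioner, OneWayAnova,
                             SensitivityBasedFeatureSelection)

    # select the 500 features with the highest ANOVA F-scores, retrained
    # from scratch inside every cross-validation fold
    fsel = SensitivityBasedFeatureSelection(
        OneWayAnova(),
        FixedNElementTailSelector(500, mode='select', tail='upper'))

    # the meta-classifier couples selection and classification, so the
    # partitioner hands it only the training portion of each fold
    fclf = FeatureSelectionClassifier(LinearCSVMC(), fsel)
    cv = CrossValidation(fclf, NFoldPartitioner())
    results = cv(ds)
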
--
thank you Tiziano!
I have started to glance over the changes -- lots of nice ones -- thanks again,
and we should finalize/merge asap -- maybe a skype/shared-screen
sprint next week over the remaining issues etc?
There is a bulk of changes in some guys' feature branches, and if we do not
merge