Thank you for your prompt response.

> Not sure why cross-validation doesn't fit your needs here. Could you
> elaborate a bit more on what you are trying to achieve?

I'll put it bluntly: I am trying to script a way for the algorithm to build a comparison pattern from one set of training stimuli, then discard that set entirely and compare the resulting pattern against a different set of test stimuli. The test stimuli will be of a different kind, but should produce similar activation patterns in the ROIs under scrutiny.

> Do you mean that you want to train with all examples from one dataset,
> then test with all examples from a different dataset?

This is exactly it.

> You could always combine the two datasets as the two cross-validation
> folds and get per-fold results, no?

This is an option we have considered (and will fall back on if we find no better solution): putting all the stimuli into a single dataset so that cross-validation becomes viable. However, we believe the predictions in the resulting confusion tables would not be as good as if the comparison could be performed as described above.

Thanks again for any help you can offer us.

J.

On Wed, Oct 24, 2012 at 3:16 PM, Francisco Pereira <[email protected]> wrote:
> You could always combine the two datasets as the two cross-validation
> folds and get per-fold results, no?
>
> Francisco
>
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> [email protected]
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
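P.S. For concreteness, the logic I am after (train on dataset A, then test on a disjoint dataset B, with no cross-validation object involved) can be sketched without PyMVPA at all. The snippet below is a minimal, self-contained numpy sketch using a nearest-mean-pattern (correlation) classifier on synthetic data; all names and numbers are made up for illustration, and in PyMVPA itself the equivalent would presumably be training a classifier on one dataset and calling its predict on the other.

```python
import numpy as np

def train_class_patterns(samples, labels):
    # Mean activation pattern per class, computed from the training stimuli only.
    return {c: samples[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_nearest_pattern(patterns, samples):
    # Assign each test sample the class whose training pattern correlates best with it.
    classes = sorted(patterns)
    preds = []
    for s in samples:
        corrs = [np.corrcoef(s, patterns[c])[0, 1] for c in classes]
        preds.append(classes[int(np.argmax(corrs))])
    return np.array(preds)

# Synthetic stand-in for two stimulus sets that share class structure:
# both are noisy versions of the same per-class prototype patterns.
rng = np.random.default_rng(0)
n_vox = 20
proto = {0: rng.normal(size=n_vox), 1: rng.normal(size=n_vox)}

train_labels = np.repeat([0, 1], 10)
train = np.vstack([proto[c] + 0.3 * rng.normal(size=n_vox) for c in train_labels])

test_labels = np.repeat([0, 1], 5)
test = np.vstack([proto[c] + 0.3 * rng.normal(size=n_vox) for c in test_labels])

# Train on dataset A only, then evaluate on the disjoint dataset B.
patterns = train_class_patterns(train, train_labels)
preds = predict_nearest_pattern(patterns, test)
accuracy = (preds == test_labels).mean()
```

The point of the sketch is only the data flow: the training set is used once to form the comparison patterns and then plays no further role, exactly as described above.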

