Hi all,

The classifiers in PyMVPA all seem to be aimed at sorting patterns into nominal categories. Is there anything that can be done when the "category" is continuous, i.e. lies on a linear scale?
So for example, in some data I'm exploring we have a DV we are using to classify patterns. The DV has a range (for example's sake, let's say 100-200), with a number of observations at each level. When we train a classifier as if these labels were nominal, it only ever predicts labels it has actually seen: if there were observations at 100 and at 105, it would predict either 100 or 105 for a new observation, but never 101, 102, 103, or values like 102.5. Ideally, I'd like to train the classifier so that it can make predictions that fall in between observed patterns, essentially allowing it to predict anything in the 100-200 range even if it hasn't observed a pattern at every single point along the scale. Is this possible?

One idea I had was to combine PCA and regression: do some data reduction with PCA, regress the DV values on the scores of the first few components in a training set, and then use that regression model to predict a test set. Run as a searchlight across the brain, this might identify areas that predominantly represent the 100-200 scale, with prediction accuracy assessed via the mean residual error of the regression predictions.

If there are better or more established ways to do this, I'd really appreciate some pointers. I haven't been able to find much on this type of situation, and I'm not aware of anyone who has published on it.

Thanks in advance.

Cheers,
Jason
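To make the PCA-plus-regression idea concrete, here's a rough sketch in plain NumPy (simulated data only, not PyMVPA's API; the sizes, the simulated "encoding" pattern, and all variable names are just made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_comp = 80, 20, 50, 5

# Simulated data: continuous labels on the 100-200 scale, patterns that
# carry a weak linear encoding of the label plus voxel noise.
y_train = rng.uniform(100, 200, n_train)
y_test = rng.uniform(100, 200, n_test)
encoding = rng.normal(size=n_voxels)
X_train = np.outer(y_train, encoding) + rng.normal(size=(n_train, n_voxels)) * 20
X_test = np.outer(y_test, encoding) + rng.normal(size=(n_test, n_voxels)) * 20

# PCA via SVD on the mean-centered training set
mu = X_train.mean(axis=0)
U, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
components = Vt[:n_comp]                  # leading principal axes
Z_train = (X_train - mu) @ components.T   # component scores

# Ordinary least squares: y ~ intercept + component scores
design = np.column_stack([np.ones(n_train), Z_train])
beta, *_ = np.linalg.lstsq(design, y_train, rcond=None)

# Predict the held-out set; predictions can land anywhere on the scale,
# not just at label values observed during training.
Z_test = (X_test - mu) @ components.T
y_pred = np.column_stack([np.ones(n_test), Z_test]) @ beta

mean_abs_err = np.mean(np.abs(y_pred - y_test))
print(f"mean absolute residual: {mean_abs_err:.1f}")
```

In a searchlight, the same fit/predict step would run on each sphere's voxels, and the mean residual error per sphere would give the accuracy map I described.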
_______________________________________________ Pkg-ExpPsy-PyMVPA mailing list [email protected] http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa

