Hi all. I'm making progress in my efforts to understand how to do things in PyMVPA, and I have some more questions.
I have an analysis problem for which I need a localizing answer -- i.e., I need some index of how useful each voxel is in contributing to the prediction of behavior. I've had good results with searchlight-style analyses, and now I'd like to try some things that are fairly basic, but new for me. The dataset I'm working with is fairly coarse structural data: the intrinsic smoothness is very high, and there are many identical voxels (some not contiguous).

I'd like to draw on the whole brain to do prediction (regression), but eventually map the feature sensitivities back onto voxels. My instinct is to do something like PCA to reduce the dataset to a workable number of features, train my model on those, and reverse-map a sensitivity measure back onto voxels using the component weights. For the time being, I'm willing to restrict myself to linear models. (Eventually, I will likely feed this into a permutation test, but probably outside of PyMVPA.)

Whether or not this turns out to be the best approach, it at least seems like something I should know how to do in PyMVPA. I looked at the documentation for PCAMapper and at the section on "Data Mapping," did some simple searches on the mailing list archives, and eventually decided it would be wiser just to ask. Can anyone suggest how best to go about this? I'm also happy to hear warnings about why this would be a bad idea.

Thanks,
dan

_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa
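To make the question concrete, here is the math I have in mind, sketched in plain NumPy rather than through PyMVPA's mappers (the array shapes and the toy data are made up; I'm hoping someone can tell me how to express the same thing with PCAMapper or a related mapper):

```python
import numpy as np

# Toy data: 100 samples x 5000 voxels, with a target that depends
# on the data (shapes and data are purely illustrative).
rng = np.random.RandomState(0)
X = rng.randn(100, 5000)
y = X[:, :3].sum(axis=1) + 0.1 * rng.randn(100)

# 1. Dimensionality reduction: PCA via SVD on mean-centered data,
#    keeping the top k components as features.
k = 20
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                      # (100, k) component scores

# 2. Train a linear model in component space (least squares here,
#    standing in for whatever linear regression one would use).
yc = y - y.mean()
w_pca, *_ = np.linalg.lstsq(scores, yc, rcond=None)   # (k,) weights

# 3. Reverse-map: project the component-space weights back through
#    the PCA loadings to get one sensitivity value per voxel.
w_voxel = Vt[:k].T @ w_pca                  # (5000,) per-voxel map
```

Because everything is linear, predictions from the per-voxel map are identical to predictions from the component-space model (Xc @ w_voxel == scores @ w_pca), which is what makes the reverse-mapped map interpretable as voxel-wise sensitivity. Is there a mapper-based idiom in PyMVPA that does this back-projection for me?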

