To run multi-subject tests I've usually converted everyone's functional images to a standard space first (MNI or whatever), then subset the data to keep only voxels with non-zero variance in all subjects.
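Something like this is what I mean, with nibabel/numpy (the subject list and filenames are just placeholders, and it assumes the per-subject images have already been warped onto the same grid):

    import nibabel as nib
    import numpy as np

    subjects = ['subj1', 'subj2', 'subj3']
    mask = None
    for subj in subjects:
        img = nib.load('%s_bold_mni.nii.gz' % subj)   # 4D: x, y, z, time
        data = img.get_fdata()
        # voxels with non-zero variance over time for this subject
        nonzero_var = data.var(axis=-1) > 0
        mask = nonzero_var if mask is None else (mask & nonzero_var)

    # across-subject intersection of the per-subject masks
    nib.save(nib.Nifti1Image(mask.astype(np.int16), img.affine),
             'intersection_mask.nii.gz')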

This approach sometimes works surprisingly well and is fairly straightforward.
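As for the PyMVPA part of your question: once you have such a mask, you can just pass it when loading each subject's data, so every resulting dataset has the same voxels in the same feature order. Roughly (the module path assumes PyMVPA 2.x):

    from mvpa2.datasets.mri import fmri_dataset

    # same mask for every subject, so feature i is the same
    # voxel in each subject's dataset
    ds = fmri_dataset(samples='subj1_bold_mni.nii.gz',
                      mask='intersection_mask.nii.gz')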

Jo



On 1/18/2012 9:30 AM, John Magnotti wrote:
Hi All,

I'm trying to build a cross-subject analysis using the Haxby et
al data (http://data.pymvpa.org/datasets/haxby2001/). The problem is
that the masks for each subject don't necessarily cover the same
voxels. Poldrack et al. [1] mention using an intersection mask to
ensure they were looking at the same voxels across subjects. Is there
a way to do this in PyMVPA, and should I do something like convert to
standard space beforehand? I could also just use the whole timeseries,
but I think there is still the issue of ensuring that the voxels
"match" across subjects, right?

Any hints or tips would be much appreciated.


Thanks,

John
