So the z-scoring/averaging order mixup is absolutely what caused these strange distributions… swapping the order or removing z-scoring returns them to normal-ish (with the z-scored results having a much narrower distribution). That said, I can't wrap my mind around why inverting the z-score/averaging order would lead to this particular strange result: not only were accuracies increased across the board, but my observed searchlight accuracy peaks were right around the hypothesized region of interest. In take 2 of the analysis, at least initially, z-scoring (at the correct time) seems to obliterate any effect… although it appears to return a bit if I leave out the z-scoring completely. I'm also wondering if there's a relatively simple way to compromise between raw 4D files and standard scores by just going for effect sizes.
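For anyone following along, here's a toy numpy sketch of the two orderings being discussed: z-scoring each voxel across all samples within a chunk (run) and then averaging the repetitions of each condition, versus averaging first and z-scoring the averaged patterns. The array layout, sizes, and the random data are made up for illustration, not the actual dataset from this thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layout: chunks (runs) x conditions x repetitions x voxels.
n_chunks, n_conditions, n_reps, n_voxels = 4, 3, 2, 10
data = rng.normal(size=(n_chunks, n_conditions, n_reps, n_voxels))

def zscore(x, axis):
    """Standardize along `axis` (mean 0, sd 1 along that axis)."""
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

# Intended order: z-score each voxel across ALL samples within a chunk,
# THEN average the repetitions of each condition.
flat = data.reshape(n_chunks, n_conditions * n_reps, n_voxels)
z_then_avg = zscore(flat, axis=1).reshape(data.shape).mean(axis=2)

# Inverted order: average repetitions first, then z-score the averaged
# patterns within a chunk. With only a few patterns per chunk, the
# normalization is dominated by the condition means themselves, which
# can distort the resulting distributions.
avg_then_z = zscore(data.mean(axis=2), axis=1)

print(z_then_avg.shape, avg_then_z.shape)  # both (4, 3, 10)
```

The two orderings generally give different patterns, which is consistent with the distributions shifting once the order was fixed; with very few samples per chunk (as in the quoted exchange below, where a chunk had exactly 3 samples) the inverted order is especially unstable.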
Thanks again,
Mike

On Thu, Oct 20, 2011 at 4:10 PM, Yaroslav Halchenko <[email protected]> wrote:
>
> On Thu, 20 Oct 2011, Mike E. Klein wrote:
> > I wasn't getting that error… at the time of zscoring I hadn't yet
> > removed category #3, so it was seeing -exactly- 3 samples per chunk.
>
> aha... indeed... may be we should warn if number of samples per chunk <
> some reasonable number, e.g. at least 5
>
> --
> =------------------------------------------------------------------=
> Keep in touch                                    www.onerussian.com
> Yaroslav Halchenko                     www.ohloh.net/accounts/yarikoptic
>
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> [email protected]
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa

