Re: [pymvpa] effect size (in lieu of zscore)

2012-01-06 Thread Mike E. Klein
Hi all, Thanks again for the continued help. I think the skews are for the reasons suggested here… I'm seeing them in a couple of subjects in the positive direction as well as the negative, and I've made quite certain that my preprocessing is correct. I'll be mindful of the histos and, for now,

Re: [pymvpa] effect size (in lieu of zscore)

2012-01-04 Thread Yaroslav Halchenko
sorry for being silent. On Tue, 03 Jan 2012, Mike E. Klein wrote: (1) I haven't done a permutation test. By "chance distribution" I just meant the bulk of the data points using my real-label-coded data. While I'm obviously hoping for a histogram that contains a positive skew, at

Re: [pymvpa] effect size (in lieu of zscore)

2012-01-04 Thread J.A. Etzel
On 1/4/2012 3:20 PM, Mike E. Klein wrote: I have toyed with a bit of ROI MVPA: found some accuracies that were above chance, though I'm not sure if they were convincingly so. You're suggesting that I should run an analysis with permuted labels on, for example, A1 and another area, and then look
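
A minimal sketch of the Monte-Carlo permutation test being discussed, assuming PyMVPA 2.x; `roi_dataset` is a hypothetical pre-extracted ROI dataset, and the permutation count is a placeholder:

    # Hedged sketch (PyMVPA 2.x assumed): estimate a null distribution of
    # cross-validation errors by permuting the target labels, then ask how
    # extreme the real-label error is against it.
    from mvpa2.clfs.svm import LinearCSVMC
    from mvpa2.clfs.stats import MCNullDist
    from mvpa2.generators.partition import NFoldPartitioner
    from mvpa2.generators.permutation import AttributePermutator
    from mvpa2.mappers.fx import mean_sample
    from mvpa2.measures.base import CrossValidation

    permutator = AttributePermutator('targets', count=200)
    null_dist = MCNullDist(permutator, tail='left',
                           enable_ca=['dist_samples'])
    cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
                         postproc=mean_sample(), null_dist=null_dist)
    err = cv(roi_dataset)            # mean real-label error
    print(cv.ca.null_prob)           # p-value vs. the permuted null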

Re: [pymvpa] effect size (in lieu of zscore)

2012-01-04 Thread J.A. Etzel
I had a similar feeling -- performance distributions should be pretty much a mixture of two: a chance distribution (centered at the chance level for that task) and some interesting one in the right tail, e.g. as we have shown in a toy example in

Re: [pymvpa] effect size (in lieu of zscore)

2012-01-03 Thread Mike E. Klein
Hi Jonas and Jo, Thanks for helping out with this! So: (1) I haven't done a permutation test. By "chance distribution" I just meant the bulk of the data points using my real-label-coded data. While I'm obviously hoping for a histogram that contains a positive skew, *at worst* I'd expect a normal

Re: [pymvpa] effect size (in lieu of zscore)

2012-01-03 Thread J.A. Etzel
Ah, rather different problem than I'd thought. Below-chance accuracies are a big problem with fMRI data ... sometimes they happen when data is poorly fitted (e.g. improper scaling), sometimes with mistakes (e.g. mislabeled cases, unbalanced training data), sometimes for no apparent reason.
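
A quick way to check for the unbalanced-training-data case Jo mentions, as a hedged PyMVPA 2.x sketch; `dataset` is assumed to carry the usual `targets` and `chunks` sample attributes:

    # Hedged sketch (PyMVPA 2.x assumed): print the class counts per chunk,
    # then draw a balanced subsample so each class contributes equally.
    import numpy as np
    from mvpa2.generators.resampling import Balancer

    for ch in np.unique(dataset.sa.chunks):
        sub = dataset[dataset.sa.chunks == ch]
        labels, counts = np.unique(sub.sa.targets, return_counts=True)
        print(ch, dict(zip(labels, counts)))

    # One balanced draw (subsamples the over-represented class).
    balanced = list(Balancer(attr='targets', count=1,
                             apply_selection=True).generate(dataset))[0]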

Re: [pymvpa] effect size (in lieu of zscore)

2011-12-23 Thread Mike E. Klein
Just a couple of updates and questions: 1. For this one particular subject, I'm still seeing the strange negative peak in the chance distribution, even without any z-scoring. The shape looks remarkably similar with or without z-scoring (whether I use the raw values or the effect sizes as input). I
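
Since the comparison here is between the shapes of two searchlight histograms, a small matplotlib sketch may help; `acc_raw` and `acc_z` are hypothetical 1-D arrays of per-sphere accuracies from the raw and z-scored runs:

    # Hedged sketch: overlay the two searchlight accuracy histograms to
    # compare their shapes and where the bulk of the spheres sits.
    import numpy as np
    import matplotlib.pyplot as plt

    bins = np.linspace(0.0, 1.0, 41)
    plt.hist(acc_raw, bins=bins, alpha=0.5, label='raw values')
    plt.hist(acc_z, bins=bins, alpha=0.5, label='z-scored')
    plt.axvline(0.5, linestyle='--', color='k', label='chance (2-way)')
    plt.xlabel('searchlight accuracy')
    plt.ylabel('number of spheres')
    plt.legend()
    plt.show()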

Re: [pymvpa] effect size (in lieu of zscore)

2011-12-22 Thread Yaroslav Halchenko
On Tue, 20 Dec 2011, Mike E. Klein wrote: Does anyone know what could cause a negative shift in the searchlight accuracy distribution for a 2-category Linear SVM classifier of a single subject? I'm seeing this in a couple of my subjects following a switch to using zscore(dataset,

Re: [pymvpa] effect size (in lieu of zscore)

2011-12-20 Thread Mike E. Klein
A separate but related issue: Does anyone know what could cause a negative shift in the searchlight accuracy distribution for a 2-category Linear SVM classifier of a single subject? I'm seeing this in a couple of my subjects following a switch to using zscore(dataset, chunks_attr='chunks',
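
For reference, the two z-scoring variants under discussion, as a minimal sketch assuming PyMVPA 2.x; the 'rest' baseline label is hypothetical:

    # Hedged sketch (PyMVPA 2.x assumed). zscore() operates in place,
    # so pick one variant per dataset.
    from mvpa2.mappers.zscore import zscore

    # Variant A: mean/std estimated from the full time series, per chunk.
    zscore(dataset, chunks_attr='chunks', dtype='float32')

    # Variant B: mean/std estimated from baseline samples only, per chunk;
    # 'rest' is a hypothetical baseline target label.
    zscore(dataset, chunks_attr='chunks',
           param_est=('targets', ['rest']), dtype='float32')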

[pymvpa] effect size (in lieu of zscore)

2011-12-16 Thread Mike E. Klein
Hi all, I'm wondering if someone could point me in the direction of calculating the effect sizes of voxels in time series against the series' baseline conditions. Ideally over multiple experimental chunks/runs. For reasons that I simply can't figure out, zscore-ing my data *always* brings down
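
One way to get what Mike is asking for, as a hedged numpy sketch rather than a PyMVPA built-in: a per-voxel Cohen's d of a condition against the baseline, computed within each chunk/run and then averaged; the 'tones'/'rest' labels are hypothetical:

    # Hedged numpy sketch: per-voxel effect size (Cohen's d) vs. baseline,
    # estimated within each run and then averaged across runs.
    import numpy as np

    def effect_size_map(data, targets, chunks, cond='tones', base='rest'):
        """data: (n_samples, n_voxels); targets/chunks: 1-D label arrays."""
        d_per_run = []
        for ch in np.unique(chunks):
            in_ch = chunks == ch
            a = data[in_ch & (targets == cond)]
            b = data[in_ch & (targets == base)]
            # pooled standard deviation per voxel
            sp = np.sqrt(((len(a) - 1) * a.var(axis=0, ddof=1) +
                          (len(b) - 1) * b.var(axis=0, ddof=1)) /
                         (len(a) + len(b) - 2))
            d_per_run.append((a.mean(axis=0) - b.mean(axis=0)) / sp)
        return np.mean(d_per_run, axis=0)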

Re: [pymvpa] effect size (in lieu of zscore)

2011-12-16 Thread Yaroslav Halchenko
before discussion kicks in -- out of curiosity... what happens if you either do nested cross-validation to choose the C parameter or just set it a bit higher (e.g. C=-5, to still be scaled according to the data)? What if you do zscoring across the full time series (not just the baseline condition) -- for both
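
Yaroslav's two suggestions side by side, as a hedged sketch assuming PyMVPA 2.x; the candidate C grid is an assumption (a negative C is, per the message above, scaled according to the data):

    # Hedged sketch (PyMVPA 2.x assumed): choose C by nested
    # cross-validation on the training partition of each outer fold.
    import numpy as np
    from mvpa2.clfs.svm import LinearCSVMC
    from mvpa2.generators.partition import NFoldPartitioner
    from mvpa2.measures.base import CrossValidation

    outer_errors = []
    for fold in NFoldPartitioner().generate(dataset):
        train = fold[fold.sa.partitions == 1]
        test = fold[fold.sa.partitions == 2]
        # inner loop: mean CV error per candidate C, on training data only
        inner = dict((C, np.mean(CrossValidation(LinearCSVMC(C=C),
                                                 NFoldPartitioner())(train)))
                     for C in (-10., -5., -1., 1.))
        best_C = min(inner, key=inner.get)
        clf = LinearCSVMC(C=best_C)          # or simply LinearCSVMC(C=-5)
        clf.train(train)
        pred = clf.predict(test.samples)
        outer_errors.append(np.mean(np.asarray(pred) != test.sa.targets))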

Re: [pymvpa] effect size (in lieu of zscore)

2011-12-16 Thread Mike E. Klein
Thanks for the response! I just took a really quick first look (just a single 2-way comparison for a single subject): Looking between my experimental conditions, it looks like setting C=-5 and/or using *zscore(dataset, chunks_attr='chunks', dtype='float32')* leads to accuracies that are a bit