Hi all,
Thanks again for the continued help. I think the skews are for the reasons
suggested here… I'm seeing them in a couple of subjects in the positive
direction as well as the negative, and I've made quite certain that my
preprocessing is correct. I'll be mindful of the histos and, for now,
sorry for being silent.
On 1/4/2012 3:20 PM, Mike E. Klein wrote:
I have toyed with a bit of ROI MVPA: found some accuracies that were
above chance, though I'm not sure if they were convincingly so. You're
suggesting that I should run an analysis with permuted labels on, for
example, A1 and another area, and then look…
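For what it's worth, a minimal sketch of that kind of permutation control,
written with numpy and scikit-learn rather than PyMVPA (my substitution;
roi_data, labels, and chunks are placeholder names): shuffle the labels many
times, re-run the leave-one-chunk-out classification, and compare the real
accuracy against the resulting null distribution.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)

    def cv_accuracy(X, y, chunks):
        # leave-one-chunk-out cross-validated accuracy for a linear SVM
        cv = LeaveOneGroupOut()
        return cross_val_score(LinearSVC(C=1.0), X, y,
                               groups=chunks, cv=cv).mean()

    def permutation_null(X, y, chunks, n_perm=1000):
        # accuracies under shuffled labels -- an empirical chance distribution
        return np.array([cv_accuracy(X, rng.permutation(y), chunks)
                         for _ in range(n_perm)])

    # hypothetical usage, e.g. for an A1 ROI:
    # real = cv_accuracy(roi_data, labels, chunks)
    # null = permutation_null(roi_data, labels, chunks)
    # p = (np.sum(null >= real) + 1.0) / (len(null) + 1.0)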
I had a similar feeling -- performance distributions should be pretty
much a mixture of two: the chance distribution (centered at chance level
for that task) and some interesting one in the right tail, e.g. as we
have shown in a toy example in…
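A toy simulation of that mixture idea (my own sketch, not the example
referenced above): most searchlights are uninformative, so their accuracies
follow the binomial chance distribution around 0.5, while a minority carry
signal and fill out the right tail.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 100  # trials behind each searchlight accuracy estimate
    # 90% uninformative searchlights: binomial around chance (0.5)
    null_accs = rng.binomial(n_trials, 0.5, size=9000) / n_trials
    # 10% informative searchlights: centered above chance
    signal_accs = rng.binomial(n_trials, 0.65, size=1000) / n_trials
    accuracies = np.concatenate([null_accs, signal_accs])
    # a histogram of `accuracies` shows the chance bulk plus a right
    # tail -- and, notably, no peak below 0.5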
Hi Jonas and Jo,
Thanks for helping out with this!
So:
(1) I haven't done a permutation test. By chance distribution I just
meant the bulk of the data points using my real-label-coded data. While I'm
obviously hoping for a histogram that contains a positive skew, *at worst* I'd
expect a normal distribution…
Ah, rather different problem than I'd thought. Below-chance accuracies
are a big problem with fMRI data ... sometimes they happen when the data is
poorly fitted (e.g. improper scaling), sometimes with mistakes (e.g.
mislabeled cases, unbalanced training data), sometimes for no apparent
reason.
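Since unbalanced training data is one of the named culprits, one cheap
diagnostic is to count each class in every leave-one-chunk-out training set.
A sketch assuming per-sample labels and chunks arrays (placeholder names):

    import numpy as np

    def check_fold_balance(labels, chunks):
        # print the class counts of each training set under
        # leave-one-chunk-out cross-validation
        labels = np.asarray(labels)
        chunks = np.asarray(chunks)
        for test_chunk in np.unique(chunks):
            train = labels[chunks != test_chunk]
            classes, counts = np.unique(train, return_counts=True)
            print("test chunk %r -> training counts %s"
                  % (test_chunk, dict(zip(classes, counts))))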
Just a couple of updates and questions:
1. For this one particular subject, I'm still seeing the strange negative
peak in the chance distribution, even without any z-scoring. The shape
looks remarkably similar with or without z-scoring (whether I use the raw
values or the effect sizes as input). I…
On Tue, 20 Dec 2011, Mike E. Klein wrote:
A separate but related issue:
Does anyone know what could cause a negative shift in the searchlight
accuracy distribution for a 2-category Linear SVM classifier in a single
subject? I'm seeing this in a couple of my subjects following a switch to
using zscore(dataset, chunks_attr='chunks', …)
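For anyone puzzling over what that call changes: zscore(dataset,
chunks_attr='chunks') standardizes each run (chunk) separately, voxel by
voxel. A rough numpy equivalent as a sketch (data as samples x voxels,
chunks as a per-sample run label; placeholder names, and this ignores
PyMVPA's option to estimate the parameters from a subset of samples):

    import numpy as np

    def zscore_per_chunk(data, chunks):
        # standardize each voxel within each run independently
        data = np.asarray(data, dtype='float64').copy()
        chunks = np.asarray(chunks)
        for c in np.unique(chunks):
            run = data[chunks == c]
            data[chunks == c] = (run - run.mean(axis=0)) / run.std(axis=0)
        return data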
Hi all,
I'm wondering if someone could point me in the direction of calculating the
effect sizes of voxels in a time series against the series' baseline
conditions, ideally over multiple experimental chunks/runs.
For reasons that I simply can't figure out, zscore-ing my data *always* brings
down…
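One common way to get such effect sizes (a sketch under my own assumptions,
not a recipe from the thread): compute a Cohen's-d-style contrast per voxel
within each run against the baseline condition, then average across runs.
Here data is samples x voxels, and condition and chunks are per-sample
labels (all placeholder names).

    import numpy as np

    def effect_size_vs_baseline(data, condition, chunks,
                                cond, baseline='baseline'):
        # per-voxel Cohen's d of `cond` vs `baseline`, averaged over runs;
        # assumes at least two samples per condition within each run
        data = np.asarray(data, dtype='float64')
        condition = np.asarray(condition)
        chunks = np.asarray(chunks)
        per_run = []
        for c in np.unique(chunks):
            a = data[(chunks == c) & (condition == cond)]
            b = data[(chunks == c) & (condition == baseline)]
            # pooled standard deviation per voxel
            sd = np.sqrt((a.var(axis=0, ddof=1)
                          + b.var(axis=0, ddof=1)) / 2.0)
            per_run.append((a.mean(axis=0) - b.mean(axis=0)) / sd)
        return np.mean(per_run, axis=0)  # one d value per voxel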
before discussion kicks in -- out of curiosity... what happens if you
either do nested cross-validation to choose the C parameter, or just set it
a bit higher (e.g. C=-5, to still be scaled according to the data)? what
happens if you do z-scoring across the full time series (not just the
baseline condition) -- for both…
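To make the first suggestion concrete, a sketch of nested cross-validation
for C using scikit-learn rather than PyMVPA (my substitution; per the thread,
a negative C like -5 in PyMVPA means the magnitude is scaled according to
the data). The inner loop picks C on the training folds only; the outer
leave-one-chunk-out loop scores it. X, y, and chunks are placeholder names.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import (GridSearchCV, LeaveOneGroupOut,
                                         cross_val_score)

    # inner loop: grid-search C; outer loop: leave-one-chunk-out scoring
    inner = GridSearchCV(LinearSVC(),
                         param_grid={'C': np.logspace(-3, 3, 7)},
                         cv=3)
    # scores = cross_val_score(inner, X, y, groups=chunks,
    #                          cv=LeaveOneGroupOut())
    # print(scores.mean())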
Thanks for the response!
I just took a really quick first look (just a single 2-way comparison for a
single subject):
Looking between my experimental conditions, it looks like setting C=-5
and/or using *zscore(dataset, chunks_attr='chunks', dtype='float32')* leads
to accuracies that are a bit…