Something strange does seem to be going on ... after double-checking the code and labels, my inclination would be to split up the stimuli: I suspect the grouping of the stimuli into trials could be causing strange effects.

Since you have a lot of examples, you could try pulling apart your trials to investigate the effect of the pairing. Basically, I'd run one analysis with only the stimuli that fell first in each pair, using some sort of partitioning into something like four groups. Then the same with only the stimuli that fell second in each pair. Then a third analysis with the same number of stimuli (e.g. just the first half of the experiment), but keeping stimuli that were paired into trials. If these analyses yield noticeably different accuracies, it would suggest that the pairing of stimuli into trials is causing the strange effects.
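Sketched in plain numpy (not actual PyMVPA calls; the attribute names are hypothetical stand-ins for your dataset's sample attributes), the three sub-analyses could select samples like this:

```python
import numpy as np

# Hypothetical per-sample attributes for 38 trials x 2 stimuli each.
rng = np.random.default_rng(0)
n_trials = 38
chunks = np.repeat(np.arange(n_trials), 2)    # one chunk per trial
position = np.tile([1, 2], n_trials)          # first/second within the trial

# Each trial shows both categories once, in randomized order.
targets = np.concatenate([rng.permutation([1, 2]) for _ in range(n_trials)])

first_only  = position == 1            # analysis 1: first-of-pair stimuli
second_only = position == 2            # analysis 2: second-of-pair stimuli
first_half  = chunks < n_trials // 2   # analysis 3: paired, half the trials

# All three masks keep 38 samples, so accuracies are directly comparable;
# run your usual cross-validated classifier on each masked subset.
for name, mask in [("first-of-pair", first_only),
                   ("second-of-pair", second_only),
                   ("paired, first half", first_half)]:
    print(name, int(mask.sum()), "samples")
```

Keeping the sample counts equal across the three analyses is what makes any accuracy difference attributable to the pairing itself rather than to dataset size.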

As always, ensure that the training and testing sets are balanced (an equal number of each type).
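A minimal way to enforce that balance, assuming integer class labels (the function name is just illustrative):

```python
import numpy as np

def balance_indices(targets, rng=None):
    """Subsample so every class contributes the same number of samples."""
    rng = np.random.default_rng(rng)
    targets = np.asarray(targets)
    classes, counts = np.unique(targets, return_counts=True)
    n = counts.min()  # size of the smallest class
    keep = np.concatenate([
        rng.choice(np.flatnonzero(targets == c), size=n, replace=False)
        for c in classes
    ])
    return np.sort(keep)

# e.g. three samples of class 0 and two of class 1 -> two of each survive
idx = balance_indices([0, 0, 0, 1, 1], rng=0)
print(idx)
```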

Do you ever have the case that the same image is put into two summary images? For example, could a volume be among the last included in the average for the first stimulus of a trial and among the first included for the second stimulus of that trial? That could potentially be quite troublesome.
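You can check this directly from the timing. Assuming volume i is acquired at i*TR, and using the 3.6-6 s averaging window and 0.61 s TR from the quoted message:

```python
import numpy as np

TR = 0.61  # seconds, from the design described below

def averaged_volumes(onset, win=(3.6, 6.0), tr=TR):
    """Volume indices acquired within [onset + win[0], onset + win[1]]."""
    lo = int(np.ceil((onset + win[0]) / tr))
    hi = int(np.floor((onset + win[1]) / tr))
    return set(range(lo, hi + 1))

# Two stimuli in one trial, onsets 6 s apart (4 s stimulus + 2 s fixation):
v1 = averaged_volumes(0.0)
v2 = averaged_volumes(6.0)
overlap = v1 & v2
print("shared volumes:", sorted(overlap))
```

If `overlap` ever comes out non-empty for your real onsets, the two summary images share data and the classifier could exploit that.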

I would plot a random subset of voxel timecourses (i.e. each voxel over the entire run, in temporal order) and mark the occurrences of the trials (i.e. trial start, trial stop). Sometimes these plots really help to spot dependencies or remaining trends.
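For example (a matplotlib sketch; the jittered onsets here are simulated from the design in the quoted message, not read from your real timing files):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                       # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
TR = 0.61
n_vols = 1500
data = rng.standard_normal((n_vols, 5))     # 5 stand-in voxel timecourses

# Trial onsets: 16 s of events plus a 2-12 s jittered ITI per trial.
itis = rng.uniform(2, 12, size=38)
onsets = np.concatenate([[0.0], np.cumsum(16 + itis)[:-1]])

t = np.arange(n_vols) * TR
fig, axes = plt.subplots(5, 1, sharex=True, figsize=(10, 8))
for ax, voxel in zip(axes, data.T):
    ax.plot(t, voxel, lw=0.5)
    for on in onsets:
        ax.axvline(on, color="r", lw=0.5)        # trial start
        ax.axvline(on + 16, color="g", lw=0.5)   # trial stop
fig.savefig("timecourses.png")
```

With real data, slow drifts or trial-locked structure in the traces tend to jump out immediately in this kind of plot.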

One last thought - how do the individual subject results look? Have you tried any sort of ROI-based analysis? Picking a few ROIs that are sensible for your experiment (ideally including one or two that *shouldn't* work) might help with the troubleshooting.

good luck,
Jo



On 3/1/2011 9:03 AM, Nynke van der Laan wrote:
Hello all,

thanks for all the very useful suggestions, it's nice to have so many
people thinking with me!
I have tried a few of the suggestions, each returning approximately the
same results:
- I have tried a LinearNuSVMC; this gives approximately the same results.
- I tried an odd/even splitter instead of an NFold splitter --> approximately
the same results.
- Detrending the data -->  approximately the same results.
- I have randomized the labels of the two categories. This resulted in
the same distribution of accuracies as with the correct labeling (peak
of the histogram at 0.6 accuracy). Does this mean that there is
contamination across chunks?
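A shuffled-label distribution should peak near 0.5, not 0.6. As a sanity check, here is a toy permutation run on pure noise, using a simple nearest-mean classifier as a stand-in for the SVM (all sizes and names here are illustrative, not your actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

def cv_accuracy(X, y, chunks):
    """Leave-one-chunk-out accuracy of a nearest-mean classifier."""
    accs = []
    for ch in np.unique(chunks):
        train, test = chunks != ch, chunks == ch
        means = np.stack([X[train & (y == c)].mean(axis=0) for c in (0, 1)])
        # assign each test sample to the nearest class mean
        pred = np.argmin(
            ((X[test, None, :] - means[None]) ** 2).sum(-1), axis=1)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Pure-noise data: 76 samples (38 trials x 2 categories), 20 features.
y = np.tile([0, 1], 38)
chunks = np.repeat(np.arange(38), 2)
X = rng.standard_normal((76, 20))

null = [cv_accuracy(X, rng.permutation(y), chunks) for _ in range(200)]
print("mean shuffled-label accuracy:", np.mean(null))  # hovers near 0.5
```

If your real shuffled-label histogram sits at 0.6 instead, something other than the category information (unbalanced folds, leakage across chunks, shared volumes between summary images) is inflating the accuracies.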

The design of the fMRI task is as follows: the task consists of 38
trials = 38 chunks.
One trial consists of the following sequence of events:
- 4 sec category 1
- 2 sec fixation cross
- 4 sec category 2
- 2 sec fixation cross
- 4 sec other event (of no importance)
- random inter trial interval between 2 and 12 sec

Thus total trial duration is between 18 and 28 seconds.

So, in each trial/chunk both category events are presented once. The
order of categories 1/2 within the trial is randomized: in some trials
first 1, then 2, or vice versa. Onsets of the category 1 and 2 events are
thus 6 seconds apart, but since the order is randomized I would not expect
problems. Between chunks there should also be enough time, I would expect...

Indeed, I have used block averaging. For this, the functional scans
between (approximately) 3.6 and 6 seconds after onset of the event are
averaged. The TR is 0.61 seconds.

Does anyone have any additional suggestions?

Thanks in advance!

Best regards,
Nynke

_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa
