I'm new to optseq and to rapid E-R designs generally.  I have two
questions about it.  If it helps, here's my (tentative) design: I'm
creating a task with 5 conditions (including rest) with a TR of 2320 ms
and a trial SOA of 4640 ms (my stimuli are on screen for 1 sec of this time).

1) I've been trying to get clear on the logic behind optimizing rapid
E-R designs.  My read of this literature has led me to the conclusion
that there are two overlapping ways to do it: A) vary the SOA from one
trial to the next, without necessarily paying much attention to trial
ordering, and B) keep the SOA constant from one trial to the next but
vary the trial ordering in a maximally efficient way (this is in effect
a "jittering", if one considers the variation in time between trial
types that results from the pseudorandomization).  One can of course
combine A and B, but if my logic holds and my read of the literature is
correct, you could stick with one or the other and still wind up with
a sufficiently optimized design.  Is this logic and conclusion correct?
If so, your program creates a time series via approach B and not A, yes?
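
To make my understanding of B concrete, here's a toy Python sketch
(entirely my own, not anything from optseq2, with a made-up run length
and equal condition proportions): with a fixed 4640 ms slot grid,
simply interspersing rest/null slots among the task conditions already
yields variable effective SOAs between task events.

    import numpy as np

    rng = np.random.default_rng(0)
    soa = 4.64       # fixed slot-to-slot SOA in seconds (from the design above)
    n_slots = 60     # made-up run length in trial slots
    # 0 = rest/null, 1-4 = task conditions; equal proportions, purely for illustration
    labels = rng.integers(0, 5, size=n_slots)

    onsets = np.arange(n_slots) * soa      # every slot starts on the fixed grid
    task_onsets = onsets[labels != 0]      # keep only the task (non-null) events
    gaps = np.diff(task_onsets)            # effective SOAs between successive task events

    print("effective task-to-task SOAs (s):", np.unique(np.round(gaps, 2)))
    # Even though the slot-to-slot SOA never changes, the task-to-task gaps come
    # out as 4.64, 9.28, 13.92, ... depending on where the rest slots land, which
    # is why I think of B as an implicit jitter.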

2) I'm somewhat familiar with the concept of efficiency in the context of
rapid E-R designs, but I haven't found a good resource that tells me
what to do with that statistic as implemented in your program.  In
other words, what is a passable efficiency threshold for optseq2?  I
imagine that, like many parameters one selects in MRI, it might depend
on any number of factors... but is there any heuristic to apply to the
efficiency number generated by your program to figure out what's good
and what's bad?
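
For concreteness, here's a minimal sketch of the efficiency statistic
as I understand it, using one common definition (e.g., Dale 1999):
eff = 1 / trace( C (X'X)^-1 C' ) for a design matrix X and contrast
matrix C.  This is my own toy code, so it may not match optseq2's
internal bookkeeping exactly, and the run length, PSD window, and trial
counts below are made up.  But it illustrates why I suspect the raw
number has no universal cutoff: it scales with run length, TR, number
of conditions, and the analysis window, so I assume it's mainly useful
for comparing candidate schedules of the same design.

    import numpy as np

    def fir_design(onsets_s, tr, n_scans, psd_window_s):
        """One FIR regressor block: a column per post-stimulus TR bin."""
        n_bins = int(np.ceil(psd_window_s / tr))
        X = np.zeros((n_scans, n_bins))
        for t in onsets_s:
            first = int(round(t / tr))
            for b in range(n_bins):
                if first + b < n_scans:
                    X[first + b, b] = 1.0
        return X

    tr = 2.32                # seconds, matching the design above
    soa = 4.64               # fixed trial grid
    n_scans = 130            # hypothetical ~5 min run
    rng = np.random.default_rng(0)
    n_slots = int(n_scans * tr // soa)          # trial slots that fit in the run
    labels = rng.integers(0, 5, size=n_slots)   # 0 = rest/null, 1-4 = task (made-up mix)
    onsets = np.arange(n_slots) * soa

    # One FIR block per task condition; rest (0) is left unmodeled.
    X = np.hstack([fir_design(onsets[labels == c], tr, n_scans, 16.0)
                   for c in (1, 2, 3, 4)])
    C = np.eye(X.shape[1])                      # contrast: all FIR estimates
    eff = 1.0 / np.trace(C @ np.linalg.pinv(X.T @ X) @ C.T)
    print(f"efficiency = {eff:.4f}")
    # Compare this number across candidate schedules for the *same* design;
    # as an absolute value it means little on its own.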

Thanks,

John

