Thanks for looking at the data.

1 - Since the data has already been acquired, I can't change that. For the
new study we are setting up, we'll definitely use isotropic resolution, and
we'll probably acquire a fieldmap to correct for EPI distortions.

FSL has some recommendations at FDT/FAQ
<http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FDT/FAQ>, and there are also
recommendations from www.birncommunity.org
<http://www.birncommunity.org/resources/supplements/brain-morphometry-multi-site-studies/best-practices-for-brain-morphometry-imaging/>.

2 - I am using the longitudinal processing stream. I usually create a bash
script that calls trac-all -path with a dmrirc.txt file listing the sessions
(timepoints) and the baselist for a single subject. I can then have a few
scripts for different subjects running concurrently. Looking at Len_Avg for
rh.cst for one subject, there seems to be more variability between -path
runs than between sessions/timepoints, even when the sessions are on
different scanners (1.5T vs. 3T).
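
For reference, a per-subject dmrirc for the longitudinal stream looks
roughly like this (paths and subject names are placeholders, and the
variable names follow the dmrirc conventions in the TRACULA documentation,
so check them against your own setup):

```shell
# Hypothetical per-subject dmrirc (sourced by trac-all as a csh-style config).
# subjlist holds this subject's timepoint directories; baselist repeats the
# subject's base template, one entry per timepoint.
setenv SUBJECTS_DIR /path/to/freesurfer/subjects
set dtroot = /path/to/tracula/output
set subjlist = (subj001_tp1 subj001_tp2)
set baselist = (subj001_base subj001_base)
```

A separate file like this per subject is what lets each subject run as its
own concurrent trac-all job.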

Specifically (using 7500 iterations), for one run, Len_Avg of rh.cst was
41.8583. The value was the same to 4 decimal places across 6 different
sessions (four 3T scans and two 1.5T scans). On a separate run, Len_Avg was
50.0967, again agreeing to 4 decimal places across the same 6 scans.
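
To put numbers on that pattern, here is a small sketch (using only the
values quoted above) contrasting the spread across sessions within a run
with the spread across runs:

```python
import statistics

# Len_Avg of rh.cst: identical across all 6 sessions within a run,
# but different between the two -path runs (values quoted above).
run1 = [41.8583] * 6  # run 1: same value on all 6 sessions (four 3T, two 1.5T)
run2 = [50.0967] * 6  # run 2: same value on all 6 sessions

within_run_sd = statistics.pstdev(run1)                 # spread across sessions
between_run_sd = statistics.pstdev([run1[0], run2[0]])  # spread across runs

print(f"within-run SD:  {within_run_sd:.4f}")   # 0.0000
print(f"between-run SD: {between_run_sd:.4f}")  # 4.1192
```

So all of the observed variability is between runs and none is between
sessions, which is what makes a shared random seed across sessions seem
plausible.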

We definitely need to run more than the default number of iterations, but
do all the subjects need to be run at the same time (listed in a single
dmrirc file)? Within the longitudinal stream, during trac-all -path, there
seems to be initialization with priors from the base space. Is there some
other interaction between timepoints/sessions/subj_list that produces
output that is nearly identical across sessions but variable across runs?
I'm wondering whether the sessions use the same random number seed for the
MCMC algorithm.

Peggy

On Fri, Jun 19, 2015 at 3:04 PM, Anastasia Yendiki <
[email protected]> wrote:

>
> Hi Peggy - Thanks for sending me the data. The tract reconstructions look
> very different from what I'm used to seeing, from other users or our own
> data. Visually, it seems to me that the 100K one is better converged: the
> probability distributions of the pathways look much sharper, so when
> thresholded you don't get much of the tails of the distribution, which
> would be noisier. It's hard to tell why it's taking so much longer to
> sample the distributions in your data. The only thing I see that stands out
> is that the resolution in z is quite low (3mm), so some of the tracts (see
> for example cingulum, corpus callosum) are only 1 voxel thick in z. This
> combined with partial voluming (ventricles seem enlarged) probably
> introduces more uncertainty in the data. If you can change the acquisition,
> I'd recommend isotropic resolution (the usual is 2mm), as anisotropic voxels
> introduce bias in estimates of diffusion anisotropy. If you have to stick
> with the existing acquisition, it looks like the default sampling settings
> will have to be changed for your case. I'm happy to help troubleshoot
> further.
>
> For a longitudinal study, I recommend using the longitudinal tracula
> stream. We've found that it improves test-retest reliability in
> longitudinal measurements substantially, while also improving sensitivity
> to longitudinal changes (the paper is in review).
>
> Best,
> a.y
>
> On Fri, 19 Jun 2015, Peggy Skelly wrote:
>
>> It's been a while, but we are still working on computing our tracts in
>> tracula!
>>
>> We are still struggling with variability of the output from 'trac-all
>> -path'. Running with the default number of iterations, there was enough
>> variability in the output across multiple runs that I ran more iterations
>> to see if the output would converge to more consistent values. Here is the
>> fa_avg_weight of the rh.cst for a single set of DWI images processed
>> through tracula, then trac-all -path run several times (with reinit=0 and
>> default values for nburnin & nkeep):
>>
>> nsample=7500: 0.516818, 0.529206, 0.495232, 0.514368, 0.492393, 0.51711
>> nsample=20000: 0.521082, 0.513492, 0.50974
>> nsample=50000: 0.506167, 0.502324, 0.504106
>> nsample=100000: 0.530423
>>
>> Between 7500 and 50k samples, it does seem like the algorithm is
>> converging, but at 100k samples the output is outside the range of any
>> previous run. (I keep looking for errors in how I ran that one, and am
>> running it again.)
>>
>> In the reference (Yendiki et al., 2011), you state that the number of
>> burn-in and sampling iterations needed to ensure convergence is a topic
>> for future investigation. Have you had any more thoughts or progress on
>> this?
>>
>> We are doing a longitudinal analysis, comparing FA_avg_weight over tracts
>> before and after a 3-month therapeutic intervention, so we anticipate
>> rather small changes. Do you have any suggestions for how to handle this
>> variability?
>>
>> Thanks,
>> Peggy
>>
>>
>>
> _______________________________________________
> Freesurfer mailing list
> [email protected]
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
>
>
> The information in this e-mail is intended only for the person to whom it
> is addressed. If you believe this e-mail was sent to you in error and the
> e-mail contains patient information, please contact the Partners Compliance
> HelpLine at http://www.partners.org/complianceline . If the e-mail was sent
> to you in error but does not contain patient information, please contact
> the sender and properly dispose of the e-mail.
>
>
>