Hi everyone,

In a previous discussion on this list 
(http://www.mail-archive.com/freesurfer@nmr.mgh.harvard.edu/msg17681.html), 
Pablo Polosecki asked about the best way to perform hypothesis testing using 
functional data within an ROI.  The consensus was that it was best to 
average all of the time courses in the ROI and re-run the analysis from 
scratch, so that all of the appropriate whitening, etc., is applied.

I was considering doing this by creating a small volume (say, 10 voxels total) 
in which each voxel holds the average time course from one of 10 ROIs.  I 
would then set up and run a standard FS-FAST analysis on these tiny volumes 
(a rough sketch of what I mean is below).  I am using FreeSurfer 4.5.  I have 
a few questions.
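
To make the idea concrete, here is a rough sketch (Python with numpy/nibabel, 
outside of FS-FAST itself) of how I would build the dummy volume.  The file 
names and ROI masks are just placeholders for my data, not anything FS-FAST 
produces:

import numpy as np
import nibabel as nib

# Load the 4D functional run; shape (x, y, z, t).  "f.nii" is a placeholder.
func = nib.load("f.nii")
data = func.get_fdata()
n_frames = data.shape[3]

# Ten binary ROI masks in the same space as the functional run (placeholders).
roi_files = ["roi%02d_mask.nii" % i for i in range(1, 11)]

# One voxel per ROI, laid out along x: final shape (10, 1, 1, t).
dummy = np.zeros((len(roi_files), 1, 1, n_frames))
for i, fname in enumerate(roi_files):
    mask = nib.load(fname).get_fdata() > 0
    dummy[i, 0, 0, :] = data[mask, :].mean(axis=0)  # ROI-average time course

# Save the tiny volume, to be fed into the usual FS-FAST analysis.
nib.save(nib.Nifti1Image(dummy, np.eye(4)), "f_roi_means.nii")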

1) Is this reasonable?

2) One area where I expect to run into problems is whitening.  If I set 
-acfbins to the total number of my dummy voxels and -acffwhm to 0, will this 
use the autocorrelation function of the average time course within each 
individual ROI?  Is that an appropriate approach?  How sensitive will it be to 
the number of actual voxels averaged to form each ROI (for instance, would it 
bias me toward finding more significant results in ROIs containing fewer voxels)?

3) Are there other steps that depend on the spatial arrangement of voxels 
that I am forgetting, and will those steps choke on these small volumes (or, 
worse, fail in silent but pernicious ways)?

Thanks for all of the help,
Clark