Is there an updated version of mri_mcsim available? When I try to run 
it, all I get back are error messages saying that none of the option 
flags are valid (e.g., ERROR: Option --base unknown).
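
In case it helps, this is roughly the command I'm running (the flag names 
are taken from the mri_mcsim help text I have, so treat them as my 
assumption if the current usage differs; paths and the subject name are 
placeholders):

   # avgsubj stands in for our in-house average subject
   mri_mcsim --o ./mult-comp-cor/avgsubj/lh/cortex \
       --base mc-z \
       --surface avgsubj lh \
       --nreps 10000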

Thanks,
Jim

Jim Porter, M.A.
Graduate Student
Clinical Science & Psychopathology Research
University of Minnesota


On 7/22/64 1:59 PM, James Porter wrote:
> Hello Doug-
>
> When using mri_mcsim, is there an option to reduce the number of FWHM 
> values it iterates over? I'd like to create a database of sims within 
> an in-house average subject as well as within different labels, but I 
> have no need for 30 FWHM values. If I could restrict it to just a few 
> choice values, that would be great.
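>
> Something like this is what I have in mind (I'm guessing at the exact 
> syntax, in particular whether --fwhm takes a list, so please correct 
> me if not):
>
>    # restrict the simulations to a few chosen smoothness levels
>    mri_mcsim --o ./mult-comp-cor/avgsubj/lh/cortex --base mc-z \
>        --surface avgsubj lh --nreps 10000 --fwhm 12 16 20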
>
> Jim Porter, M.A.
> Graduate Student
> Clinical Science & Psychopathology Research
> University of Minnesota
>
>
> On 7/22/64 1:59 PM, Douglas N Greve wrote:
>> use the value in the fwhm.dat. I would always round up to be safe.
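>>
>> For example (paths are just illustrative):
>>
>>   cat glmdir/fwhm.dat          # prints something like 13.6
>>   # round up to 14 and use the matching precomputed table, e.g.
>>   #   fsaverage/lh/cortex/fwhm14/abs/th33/mc-z.csd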
>>
>> doug
>>
>> Michael Waskom wrote:
>>> Hi Doug,
>>>
>>> This looks very helpful!  Should we use the FWHM corresponding to the 
>>> argument of the --fwhm flag when recon-all runs mri_surf2surf (for the 
>>> anatomical stats), or round the value found in fwhm.dat in the glmfit 
>>> dir?
>>>
>>> Thanks,
>>> Mike
>>>
>>> ---------- Forwarded message ----------
>>> From: Douglas N Greve <gr...@nmr.mgh.harvard.edu>
>>> To: port...@umn.edu
>>> Date: Tue, 27 Apr 2010 11:17:24 -0400
>>> Subject: Re: [Freesurfer] question about clustering simulation
>>> Yea, that was a bug in the simulation program I used. I did confirm 
>>> that the simulations were done on the correct hemisphere. Feel free 
>>> to edit the files. I'll fix the master when I get a chance.
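>>>
>>> For example, something along these lines should do it (untested, and 
>>> adjust the glob to however you unpacked the tarball):
>>>
>>>   # rewrite the hemisphere recorded in the rh CSD headers
>>>   sed -i.bak 's/fsaverage lh/fsaverage rh/' \
>>>       fsaverage/rh/cortex/fwhm*/*/th*/mc-z.csd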
>>>
>>> doug
>>>
>>> James Porter wrote:
>>>
>>>     Doug-
>>>
>>>     Let me be the first to say thank you for saving me massive amounts
>>>     of simulation time. One bug though: the right hemisphere files say
>>>     they were created using the left hemisphere, and mri_surfcluster
>>>     rejects them. Would it be verboten to just alter the csd files, or
>>>     do you have versions that were created off of fsaverage's right
>>>     hemisphere?
>>>
>>>     matacao:mult-comp-cor porterj$ head fsaverage/rh/cortex/fwhm19/abs/th33/mc-z.csd
>>>     # simtype null-z
>>>     # anattype surface  fsaverage lh
>>>     # FixGroupSubjectArea 1
>>>     # merged      0
>>>     # contrast    NA
>>>     # seed        1271355821
>>>     # thresh      3.300000
>>>     # threshsign  0.000000
>>>     # searchspace 74490.928733
>>>     # nullfwhm    19.000000
>>>
>>>     Jim Porter, M.A.
>>>     Graduate Student
>>>     Clinical Science & Psychopathology Research
>>>     University of Minnesota
>>>
>>>
>>>     On 7/22/64 1:59 PM, Douglas N Greve wrote:
>>>
>>>         1. Correct on both counts. When I wrote the simulation, I was
>>>         only trying to replicate the random fields analysis, but with
>>>         a simulation you have more freedom, which I am not yet
>>>         exploiting.
>>>         2. This is what we are already doing with mc-z.
>>>         3. I'm working on this as well. It turns out that the random
>>>         fields approximation works a lot better when using the number
>>>         of vertices.
>>>
>>>         Also, I've run mc-z simulations under a bunch of thresholding
>>>         and FWHM conditions for whole-hemisphere cortex labels. These
>>>         will be integrated into a new version of FS, but I've put them
>>>         here
>>>         
>>> ftp://surfer.nmr.mgh.harvard.edu/transfer/outgoing/flat/greve/mult-comp-cor.tar.gz
>>>  
>>>
>>>         as well. There's a README file in there. This will make
>>>         running your own time-consuming simulations unnecessary (when
>>>         using the cortex mask at least).
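>>>
>>>         To use one of them, the idea is to hand the precomputed CSD
>>>         to mri_surfcluster along with your significance map. A rough
>>>         sketch (double-check the exact options against
>>>         mri_surfcluster --help and the README; paths and the FWHM
>>>         level are illustrative):
>>>
>>>           mri_surfcluster --in glmdir/contrast/sig.mgh \
>>>             --subject fsaverage --hemi lh \
>>>             --csd fsaverage/lh/cortex/fwhm14/abs/th33/mc-z.csd \
>>>             --thmin 3.3 --sign abs --sum cluster.summary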
>>>
>>>         doug
>>>
>>>         Anthony Dick wrote:
>>>
>>>             Hello all,
>>>
>>>             I am interested in using the mri_glmfit simulation to
>>>             control for multiple comparisons in data I have run on the
>>>             surface in AFNI. Before doing this, I have a few questions:
>>>
>>>             1. What does the simulation with the mc-z flag do,
>>>             exactly? It claims to be comparable to AFNI's AlphaSim,
>>>             but it takes the maximum cluster area on each iteration,
>>>             which is not exactly what AlphaSim does. Here is my guess:
>>>
>>>             Given a surface, a given smoothness of the data, and a
>>>             given per-vertex threshold, on each iteration the
>>>             simulation populates the surface with random data drawn
>>>             from a normal distribution, smooths it to the smoothness
>>>             of the actual data (supplied as an input parameter),
>>>             thresholds it, and then computes the maximum cluster area
>>>             for that "image". Doing this for n iterations gives the
>>>             distribution of maximum cluster sizes that occur in random
>>>             data of that smoothness, and taking the cluster size at a
>>>             given percentile of that distribution controls the FWE at
>>>             one minus that percentile (e.g., the 95th percentile
>>>             controls FWE at .05). AlphaSim does something similar,
>>>             although instead of taking the maximum cluster size on
>>>             each iteration it tabulates all cluster sizes. AlphaSim
>>>             also allows for different cluster connectivity radii, but
>>>             it seems Freesurfer clusters only over neighboring
>>>             vertices. All in all, if this is correct, it seems like a
>>>             good implementation (an example invocation is sketched
>>>             after this list).
>>>
>>>             2. It is my understanding that one could bypass running
>>>             the GLM in Freesurfer and compute only the simulation,
>>>             since the simulation only needs information about the
>>>             surface and the smoothness of the data (both supplied by
>>>             the user). To do so, you have to "fake out" Freesurfer
>>>             into bypassing the GLM, but that turns out to be pretty
>>>             painless.
>>>
>>>             3. In a future distribution, is it possible to modify this
>>>             procedure to also output maximum cluster sizes in terms of
>>>             number of nodes, rather than area?
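>>>
>>>             For concreteness, the invocation I have in mind is roughly
>>>             the following (argument order taken from mri_glmfit --help
>>>             as I read it, so treat the details as my assumption; file
>>>             names are placeholders):
>>>
>>>               # 10000 iterations at a vertex-wise threshold of
>>>               # -log10(p) = 3 (p < .001); the resulting CSD files
>>>               # hold the null distribution of maximum cluster sizes
>>>               mri_glmfit --y lh.thickness.mgh --fsgd group.fsgd \
>>>                 --surf fsaverage lh --glmdir glmdir \
>>>                 --sim mc-z 10000 3 mc-z --sim-sign abs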
>>>
>>>             Can you please let me know if I am mistaken in any of
>>>             these assumptions? Thanks in advance.
>>>
>>>             Anthony
>>>
>>>
>>>
>>>
>>>
>>> -- 
>>> Douglas N. Greve, Ph.D.
>>> MGH-NMR Center
>>> gr...@nmr.mgh.harvard.edu
>>> Phone Number: 617-724-2358 Fax: 617-726-7422
>>>
>>> Bugs: surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
>>> FileDrop: www.nmr.mgh.harvard.edu/facility/filedrop/index.html
>>>
>>
>
_______________________________________________
Freesurfer mailing list
Freesurfer@nmr.mgh.harvard.edu
https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer


