Hi Derek

With the default burn-in period (b=1000 iterations), on a CPU cluster you should expect ~10 hours per subject. We are using a GPU cluster, so it takes us much less than that.
It's unrealistic to proceed without any parallelisation or GPU acceleration (you would be looking at many days per subject).

Cheers
Stam

On 27 Feb 2015, at 15:38, Archer, Derek B <arche...@ad.ufl.edu> wrote:

Hi Matt,

What computational time do you typically expect for bedpostx to run on the diffusion data? (With no parallelization.)

Thanks,
Derek Archer

From: Glasser, Matthew [mailto:glass...@wusm.wustl.edu]
Sent: Tuesday, February 17, 2015 12:55 PM
To: Archer, Derek B; Stamatios Sotiropoulos; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Processing Diffusion MRI Data

Are you running the command on ${StudyFolder}/${Subject}/T1w/Diffusion? You'll need the latest version of FSL (5.0.8) on your cluster as well.

Peace,
Matt.

From: <Archer>, Derek B <arche...@ad.ufl.edu>
Date: Tuesday, February 17, 2015 at 11:53 AM
To: Matt Glasser <glass...@wusm.wustl.edu>, Stamatios Sotiropoulos <stamatios.sotiropou...@ndcn.ox.ac.uk>, "hcp-users@humanconnectome.org"
Subject: RE: [HCP-Users] Processing Diffusion MRI Data

Hi Matt,

The exact command I'm using locally (following Stam's input) is:

bedpostx <subject directory> -n 3 -model 2 -g --rician

Subject directory contents: bvals, bvecs, data.nii.gz, nodif_brain_mask.nii.gz, grad_dev.nii.gz

This works. How long do you estimate this will take? The -g option is not available on the cluster I am working on (I'm not sure why that is the case), and I think that may be causing the issue.
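Since a bedpostx run can take many hours, it can help to confirm up front that the subject directory holds exactly the files listed above before launching it. A minimal sketch in plain shell (`check_bedpostx_dir` is a hypothetical helper, not part of FSL):

```shell
#!/bin/sh
# Verify a subject directory has the files bedpostx (run with -g) expects.
# Prints "OK" if everything is present; otherwise lists what is missing.
check_bedpostx_dir() {
    dir="$1"
    missing=""
    for f in bvals bvecs data.nii.gz nodif_brain_mask.nii.gz grad_dev.nii.gz; do
        [ -e "$dir/$f" ] || missing="$missing $f"
    done
    if [ -z "$missing" ]; then
        echo "OK"
    else
        echo "missing:$missing"
        return 1
    fi
}
```

Running it as `check_bedpostx_dir <subject_directory>` before submitting the job catches a missing grad_dev.nii.gz (needed for -g) immediately rather than hours in.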
Thanks,
Derek

From: Glasser, Matthew [mailto:glass...@wusm.wustl.edu]
Sent: Tuesday, February 17, 2015 12:47 PM
To: Archer, Derek B; Stamatios Sotiropoulos; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Processing Diffusion MRI Data

How about posting the exact command line you are using and a listing of the contents of <subject_directory>?

Peace,
Matt.

From: <Archer>, Derek B <arche...@ad.ufl.edu>
Date: Tuesday, February 17, 2015 at 9:20 AM
To: Stamatios Sotiropoulos <stamatios.sotiropou...@ndcn.ox.ac.uk>, "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] Processing Diffusion MRI Data

Hi Matt and Stam,

I still seem to be having some sort of issue running bedpostx. I'm using:

bedpostx <subject_directory> --cnonlinear --rician --model=2 --nf=3

(The -g option is not available in my version of FSL (5.0.5). I am on a cluster environment.) When I run this line, I get the following errors:

** ERROR: nifti_convert_nhdr2nim: bad dim[0]
** ERROR: (nifti_image_read): cannot create nifti image from header 'data'
** ERROR: nifti_image_open (data): bad header info
** ERROR: failed to open file data

The error seems to be with data.nii.gz, but is the error actually occurring because I am not feeding in grad_dev.nii.gz via the -g option? Do I need a newer version of FSL?

Thanks,
Derek Archer

From: Stamatios Sotiropoulos [mailto:stamatios.sotiropou...@ndcn.ox.ac.uk]
Sent: Monday, February 16, 2015 12:48 PM
To: Archer, Derek B
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Processing Diffusion MRI Data

Hi Derek

Please note that model=3 has not been officially released in FSL yet. Also, there should be an HCP release in the near future that includes bedpostx results computed with the optimal parameters.
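For what it's worth, "bad dim[0]" / "bad header info" from the NIfTI reader means data.nii.gz itself could not be parsed, which is independent of the -g option; a truncated or corrupted copy of the file (e.g. an interrupted transfer) is a common culprit. A minimal first check in plain shell, no FSL required (`verify_gz` is a hypothetical helper, not an FSL tool):

```shell
#!/bin/sh
# A truncated or corrupted download is a common cause of "bad header info".
# gzip -t validates the compressed stream without writing anything to disk.
verify_gz() {
    if gzip -t "$1" 2>/dev/null; then
        echo "gzip stream OK: $1"
    else
        echo "CORRUPT or truncated: $1"
        return 1
    fi
}
```

If the stream tests clean, FSL's fslhd can then print the header (fslhd data) so you can check whether dim0 is sensible for a 4D diffusion series.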
Stam

On 16 Feb 2015, at 16:25, Archer, Derek B <arche...@ad.ufl.edu> wrote:

Matt,

Thanks for the fast reply. I will use these three inputs in bedpostX.

Derek

From: Glasser, Matthew [mailto:glass...@wusm.wustl.edu]
Sent: Monday, February 16, 2015 11:24 AM
To: Archer, Derek B; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Processing Diffusion MRI Data

Why do you need to use FLIRT/FNIRT with already registered data? As for the others, and aside from the flags you already mention, I think the recommendation for bedpostX will include these flags:

-n 3 (for 3 fibers)
--cnonlinear
--rician
--model=3

Not all of this may be available yet in the public release (you might need to use --model=2 for now).

Peace,
Matt.

From: <Archer>, Derek B <arche...@ad.ufl.edu>
Date: Monday, February 16, 2015 at 9:16 AM
To: "hcp-users@humanconnectome.org"
Subject: [HCP-Users] Processing Diffusion MRI Data

Hello,

I am trying to analyze the preprocessed diffusion data; however, I am having some difficulties. Is there documentation that outlines how to use bedpostx, FLIRT/FNIRT, DTIFIT and probtrackx with this data? From what I've found in the archives, I need to do the following:

Bedpostx: include the -g option
FLIRT/FNIRT: do I include any extra options here?
DTIFIT: include the --gradnonlin option

Is this all that needs to be done to have this data ready for tractography? Or are there steps that I have missed? Please let me know if there is any documentation I have failed to locate.
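The recommended flags discussed above can be collected in one place so the model choice is explicit. A sketch only (`bedpostx_cmd` is a hypothetical wrapper; <subject_directory> is a placeholder, and --model=3 may not be available in older FSL builds such as 5.0.5, in which case --model=2 is the fallback noted above):

```shell
#!/bin/sh
# Compose the recommended bedpostx invocation as a string; pass model 3
# if your FSL build supports it, otherwise fall back to model 2.
bedpostx_cmd() {
    subjdir="$1"   # e.g. ${StudyFolder}/${Subject}/T1w/Diffusion
    model="$2"     # 3 if available in your FSL release, else 2
    echo "bedpostx $subjdir -n 3 --cnonlinear --rician --model=$model"
}
```

Printing the command first (e.g. `bedpostx_cmd /data/subj01 2`) and eyeballing it before running is cheap insurance given the runtime involved.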
Thanks,
Derek Archer

_______________________________________________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

________________________________
The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.