Well, I guess that would be us every time we update our local installation.  
We’ll let you know how it goes.
________________________
 gaurav patel
 gauravpa...@gmail.com
 pateldsclab.net

> On Mar 2, 2018, at 2:30 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> Hi Gaurav,
> 
> I guess they are supported on Mac for now, but I don't know that anyone is
> actively testing them for Mac…
> 
> Matt.
> 
> On 3/2/18, 1:28 PM, "Gaurav Patel" <gaurav.pa...@gmail.com on behalf of
> gauravpa...@gmail.com> wrote:
> 
>> Yes… we've been on 3.4 for many years without problems, and 3.22 only
>> required those minor changes.  We recently verified that it was easy to
>> install and run on Macs when we expanded the network of Mac Pros we use
>> to run the HCP Pipelines.
>> ________________________
>> gaurav patel
>> gauravpa...@gmail.com
>> pateldsclab.net
>> 
>>> On Mar 2, 2018, at 2:25 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
>>> 
>>> So you are saying that the Pipelines work fine on the Mac for you aside
>>> from this cp issue?
>>> 
>>> Matt.
>>> 
>>> From: <hcp-users-boun...@humanconnectome.org> on behalf of "Sanchez,
>>> Juan (NYSPI)" <juan.sanc...@nyspi.columbia.edu>
>>> Date: Friday, March 2, 2018 at 1:24 PM
>>> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Subject: Re: [HCP-Users] HCP-Users Digest, Vol 64, Issue 2
>>> 
>>> Dear Kwan-Jin Jung,
>>> 
>>> I had the same problems when I installed the 3_22 HCP Pipelines.
>>> Here is what we did:
>>> 
>>> 
>>> 1. cp syntax in scripts:
>>> Just remove the "--preserve=timestamps" from all of the Pipeline
>>> scripts (a sketch of one way to do this follows below).
>>> 
>>> 2. workbench:
>>> 
>>> Get the "dev_latest" zip file that says "mac64" from here:
>>> 
>>> http://brainvis.wustl.edu/workbench/
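>>> 
>>> For item 1, a minimal, untested sketch of one way to strip that flag
>>> from every script in an assumed ~/Pipelines checkout (macOS/BSD sed
>>> needs the empty '' after -i, unlike GNU sed; back up the scripts first):
>>> 
>>>   grep -rl -- '--preserve=timestamps' ~/Pipelines |
>>>   while read -r f; do
>>>       # drop the flag wherever it appears in each matching script
>>>       sed -i '' 's/ *--preserve=timestamps//g' "$f"
>>>   done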
>>> 
>>> From: hcp-users-boun...@humanconnectome.org
>>> <hcp-users-boun...@humanconnectome.org> on behalf of
>>> hcp-users-requ...@humanconnectome.org
>>> <hcp-users-requ...@humanconnectome.org>
>>> Sent: Friday, March 2, 2018 1:00:01 PM
>>> To: hcp-users@humanconnectome.org
>>> Subject: HCP-Users Digest, Vol 64, Issue 2
>>> 
>>> ATTENTION: This email came from an external source. Do not open
>>> attachments or click on links from unknown senders or unexpected emails.
>>> 
>>> 
>>> Send HCP-Users mailing list submissions to
>>>        hcp-users@humanconnectome.org
>>> 
>>> To subscribe or unsubscribe via the World Wide Web, visit
>>>        http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> or, via email, send a message with subject or body 'help' to
>>>        hcp-users-requ...@humanconnectome.org
>>> 
>>> You can reach the person managing the list at
>>>        hcp-users-ow...@humanconnectome.org
>>> 
>>> When replying, please edit your Subject line so it is more specific
>>> than "Re: Contents of HCP-Users digest..."
>>> 
>>> 
>>> Today's Topics:
>>> 
>>>   1. Re: Best Approach for using old volumetric data to pick
>>>      parcels-of-interest (Stevens, Michael)
>>>   2. Movement regressor missing (Linnman, Clas,Ph.D.)
>>>   3. Re: Movement regressor missing (Glasser, Matthew)
>>>   4. Error in FreeSurfer processing "recon-all.v6.hires: command
>>>      not found" (Pubuditha Abeyasinghe)
>>>   5. Re: Error in FreeSurfer processing "recon-all.v6.hires:
>>>      command not found" (Glasser, Matthew)
>>>   6. Re: Error in FreeSurfer processing "recon-all.v6.hires:
>>>      command not found" (Timothy B. Brown)
>>>   7. Re: Best Approach for using old volumetric data to pick
>>>      parcels-of-interest (Timothy Coalson)
>>>   8. Re: ROI cluster centers to surface grayordinates (Timothy Coalson)
>>>   9. Re: Best Approach for using old volumetric data to pick
>>>      parcels-of-interest (Erin W. E. Dickie)
>>>  10. Re: Correct interpretation of NIH battery test
>>>      'Words-in-Noise' in HCP subjects (Robert Becker)
>>>  11. Split dtseries (A R)
>>>  12. Re: Split dtseries (Glasser, Matthew)
>>>  13. Change of FreeSurferHiresPial.sh for MacBook IOS (Kwan-Jin Jung)
>>>  14. Re: Change of FreeSurferHiresPial.sh for MacBook IOS
>>>      (Glasser, Matthew)
>>> 
>>> 
>>> ----------------------------------------------------------------------
>>> 
>>> Message: 1
>>> Date: Thu, 1 Mar 2018 19:22:32 +0000
>>> From: "Stevens, Michael" <michael.stev...@hhchealth.org>
>>> Subject: Re: [HCP-Users] Best Approach for using old volumetric data
>>>        to pick parcels-of-interest
>>> To: Timothy Coalson <tsc...@mst.edu>, "Glasser, Matthew"
>>>        <glass...@wustl.edu>
>>> Cc: "Erin W. E. Dickie" <erin.w.dic...@gmail.com>,
>>>        "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID:
>>> 
>>> <1cf06fac1128cf49ab6143a2d6a10f859345e...@hhcexchmb04.hhcsystem.org>
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> Hi Tim,
>>> 
>>> Thanks.  That's clear and sounds like a really reasonable approach.
>>> 
>>> Can you point me towards the exact files I'd need to reference and
>>> maybe suggest which function calls I'll need to use to do the
>>> volume-to-surface mapping you describe?  I'll whip up a quick script to
>>> loop through about 120 datasets from this R01 project and let you know
>>> how well it works.
>>> 
>>> Mike
>>> 
>>> 
>>> From: Timothy Coalson [mailto:tsc...@mst.edu]
>>> Sent: Friday, February 23, 2018 6:49 PM
>>> To: Glasser, Matthew
>>> Cc: Stevens, Michael; Erin W. E. Dickie; hcp-users@humanconnectome.org
>>> Subject: Re: [HCP-Users] Best Approach for using old volumetric data to
>>> pick parcels-of-interest
>>> 
>>> This is an email from Outside HHC. USE CAUTION opening attachments or
>>> links from unknown senders.
>>> 
>>> Surface-based methods may boost your statistical power enough (by
>>> better alignment, exclusion of irrelevant tissue, and smoothing that
>>> doesn't cross sulcal banks, if you decide you need smoothing) that you
>>> may not need to rely as much on existing ROIs.  Parcel-based statistics
>>> have a lot of power, because the multiple comparisons are orders of
>>> magnitude smaller, spatially independent noise averages out, and the
>>> signal averages together.  We believe that a lot of old data would
>>> benefit from reanalysis using surfaces.
>>> 
>>> However, our paper is mainly focused on specificity and continuous
>>> data.  If you have a binary volume ROI and you only need a rough guess
>>> of it on the surface, you can get approximate answers, in a way that
>>> should reduce false negatives (and give more false positives) from the
>>> surface/volume transition problems.  You can map the ROI to the
>>> anatomical MNI surfaces of a group of subjects, and take the max across
>>> subjects.  Each individual may miss the expected group ribbon location
>>> in any given location, but it is very likely that every point in the
>>> expected group ribbon location will overlap with at least one subject in
>>> the group.  If this isn't enough, you can dilate the volume ROI a few mm
>>> first.
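>>> 
>>> As a rough sketch of the "max across subjects" step (file names are
>>> placeholders; each sub*_roi.L.func.gii would come from mapping the
>>> volume ROI onto that subject's MNI-space surfaces, as in the
>>> -volume-to-surface-mapping call shown later in this digest):
>>> 
>>>   # stack the per-subject mapped ROIs into one multi-map metric file
>>>   wb_command -metric-merge all_subjects_roi.L.func.gii \
>>>       -metric sub01_roi.L.func.gii \
>>>       -metric sub02_roi.L.func.gii       # ...one -metric per subject
>>>   # take the vertex-wise maximum across subjects
>>>   wb_command -metric-reduce all_subjects_roi.L.func.gii MAX \
>>>       group_max_roi.L.func.gii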
>>> 
>>> Tim
>>> 
>>> 
>>> On Fri, Feb 23, 2018 at 11:18 AM, Glasser, Matthew
>>> <glass...@wustl.edu<mailto:glass...@wustl.edu>> wrote:
>>> Hi Mike,
>>> 
>>> We have a preprint out on this exact question and the conclusion is
>>> that it is really hard to do this accurately for most brain regions:
>>> 
>>> https://www.biorxiv.org/content/early/2018/01/29/255620
>>> 
>>> Really the best idea is probably to go back and reanalyze the old data
>>> without volume-based smoothing and aligned across surfaces.  Erin
>>> Dickie, CCed is working on tools to make this a little easier, but still
>>> there are issues like needing a field map to get accurate fMRI to
>>> structural registration.  The good news is that one's statistical power
>>> should be much better if brains are actually lined up, and using
>>> parcellated analyses instead of smoothing offers further benefits.
>>> 
>>> Matt.
>>> 
>>> From: 
>>> <hcp-users-boun...@humanconnectome.org<mailto:hcp-users-bounces@humanconn
>>> ectome.org>> on behalf of "Stevens, Michael"
>>> <michael.stev...@hhchealth.org<mailto:michael.stev...@hhchealth.org>>
>>> Date: Friday, February 23, 2018 at 8:58 AM
>>> To: 
>>> "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>"
>>> <hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>>
>>> Subject: [HCP-Users] Best Approach for using old volumetric data to
>>> pick parcels-of-interest
>>> 
>>> Hi everyone,
>>> 
>>> There's been a lot posted here over the past year or two on the
>>> challenges and limitations of going back-and-forth between volumetric
>>> space and HCP-defined surface space, with solid arguments for moving to
>>> (and sticking with) CIFTI-defined brainordinates.  Here, I'm asking a
>>> slightly different question… The field has decades of research using
>>> volume-space fMRI timeseries analyses that helps to define where to look
>>> in the brain to test new hypotheses.  Has anyone got a well-thought-out
>>> approach for mapping such volume-space ROIs to the parcels within the
>>> new HCP 180 atlas?  I ask because the specificity of the HCP atlas
>>> sometimes offers a half dozen candidate parcels for hypothesis-testing
>>> for what we previously thought of as just one or two regions.  Even
>>> though our group currently has a half dozen newer NIH-funded studies
>>> that use HCP compliant sequences, most of that work is still predicated
>>> on a "region-of-interest" approach because the study group sizes are
>>> less than a hundred, not in the thousands typical of the HCP
>>> grantwork.  So we
>>> still have to contend with the statistical power limitations inherent in
>>> any ROI approach.  It would be great to be able to use our prior
>>> volume-space data to have greater confidence in selecting among the
>>> various parcel-of-interest candidates when testing hypotheses.
>>> 
>>> I'm wondering if anyone's yet worked out a step-by-step approach for a
>>> series of warps/surface-maps/transformations that can take ROIs from MNI
>>> space and give a "best guess" as to which HCP 180 atlas parcel(s) should
>>> be queried in such instances.  It would be a nice bridge from older work
>>> to newer HCP-guided work, that would allow researchers to circumvent the
>>> added burden of having to go back and collect new pilot data using HCP
>>> sequences.  A thoughtful list of the analytic or conceptual pros/cons of
>>> something like this would be helpful as well.
>>> 
>>> Thanks,
>>> Mike
>>> 
>>> 
>>> This e-mail message, including any attachments, is for the sole use of
>>> the intended recipient(s) and may contain confidential and privileged
>>> information. Any unauthorized review, use, disclosure, or distribution
>>> is prohibited. If you are not the intended recipient, or an employee or
>>> agent responsible for delivering the message to the intended recipient,
>>> please contact the sender by reply e-mail and destroy all copies of the
>>> original message, including any attachments.
>>> 
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> 
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> 
>>> 
>>> 
>>> Reminder: This e-mail and any attachments are subject to the current
>>> HHC email retention policies. Please save or store appropriately in
>>> accordance with policy.
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /d798ca00/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 2
>>> Date: Thu, 1 Mar 2018 20:02:08 +0000
>>> From: "Linnman, Clas,Ph.D." <clinn...@partners.org>
>>> Subject: [HCP-Users] Movement regressor missing
>>> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID: <14e25807-db1b-4132-ad77-1ff863216...@contoso.com>
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> Hi,
>>> I downloaded the "resting state fMRI FIX 1 Denoised (Extended)"
>>> datasets, but there are no Movement_Regressors.txt included in the
>>> unzipped folder.
>>> Are these not there for a reason? I can grab them from the other
>>> preprocessed datasets, but I was curious as to why they are missing, and
>>> if they would be identical to those in the "Resting State fMRI 1
>>> Preprocessed" package.
>>> 
>>> Best
>>> Clas
>>> 
>>> 
>>> The information in this e-mail is intended only for the person to whom
>>> it is
>>> addressed. If you believe this e-mail was sent to you in error and the
>>> e-mail
>>> contains patient information, please contact the Partners Compliance
>>> HelpLine at
>>> http://www.partners.org/complianceline . If the e-mail was sent to you
>>> in error
>>> but does not contain patient information, please contact the sender and
>>> properly
>>> dispose of the e-mail.
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /c9c590f9/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 3
>>> Date: Thu, 1 Mar 2018 20:05:01 +0000
>>> From: "Glasser, Matthew" <glass...@wustl.edu>
>>> Subject: Re: [HCP-Users] Movement regressor missing
>>> To: "Linnman, Clas,Ph.D." <clinn...@partners.org>,
>>>        "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID: <d6bdb8ec.1637d7%glass...@wustl.edu>
>>> Content-Type: text/plain; charset="windows-1252"
>>> 
>>> That's right. To keep the packages smaller we tried not to repeat files.
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> 
>>> From: 
>>> <hcp-users-boun...@humanconnectome.org<mailto:hcp-users-bounces@humanconn
>>> ectome.org>> on behalf of "Linnman, Clas,Ph.D."
>>> <clinn...@partners.org<mailto:clinn...@partners.org>>
>>> Date: Thursday, March 1, 2018 at 2:02 PM
>>> To: 
>>> "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>"
>>> <hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>>
>>> Subject: [HCP-Users] Movement regressor missing
>>> 
>>> Hi,
>>> I downloaded the "resting state fMRI FIX 1 Denoised (Extended)"
>>> datasets, but there are no Movement_Regressors.txt included in the
>>> unzipped folder.
>>> Are these not there for a reason? I can grab them from the other
>>> preprocessed datasets, but I was curious as to why they are missing, and
>>> if they would be identical to those in the "Resting State fMRI 1
>>> Preprocessed" package.
>>> 
>>> Best
>>> Clas
>>> 
>>> The information in this e-mail is intended only for the person to whom
>>> it is
>>> addressed. If you believe this e-mail was sent to you in error and the
>>> e-mail
>>> contains patient information, please contact the Partners Compliance
>>> HelpLine at
>>> http://www.partners.org/complianceline . If the e-mail was sent to you
>>> in error
>>> but does not contain patient information, please contact the sender and
>>> properly
>>> dispose of the e-mail.
>>> 
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /6c3b8429/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 4
>>> Date: Thu, 1 Mar 2018 20:09:05 +0000
>>> From: Pubuditha Abeyasinghe <pabey...@uwo.ca>
>>> Subject: [HCP-Users] Error in FreeSurfer processing
>>>        "recon-all.v6.hires: command not found"
>>> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID:
>>> 
>>> <ytxpr0101mb1711242b943ee17bf6d1c719af...@ytxpr0101mb1711.canprd01.prod.O
>>> UTLOOK.COM>
>>> 
>>> Content-Type: text/plain; charset="iso-8859-1"
>>> 
>>> Hi all,
>>> 
>>> I am very new to the HCP pipeline for preprocessing the data and I am
>>> trying to adapt to the pipeline.
>>> As the first step I am trying the execution of the pipeline with the
>>> example data that is given in the tutorial. The first part which is the
>>> PreFreeSurfer processing was completed successfully.
>>> 
>>> But I am having a problem with the second part, the FreeSurfer
>>> processing. When I start it, the process instantly comes to an end
>>> giving me the following error:
>>> 
>>> ~/Pipelines/FreeSurfer/FreeSurferPipeline.sh: line 41:
>>> recon-all.v6.hires: command not found
>>> 
>>> I double-checked the FreeSurfer installation and sourced it as explained
>>> as well. Does this error have something to do with the installation? How
>>> can I fix it?
>>> 
>>> 
>>> Your help is much appreciated!
>>> 
>>> 
>>> Regards,
>>> Pubuditha
>>> 
>>> 
>>> [Western University]
>>> Pubuditha Abeyasinghe
>>> PhD Candidate
>>> Department of Physics and Astronomy
>>> Brain and Mind Institute
>>> Western University
>>> London, ON, Canada
>>> email: pabey...@uwo.ca
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /4c2b4aff/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 5
>>> Date: Thu, 1 Mar 2018 20:12:21 +0000
>>> From: "Glasser, Matthew" <glass...@wustl.edu>
>>> Subject: Re: [HCP-Users] Error in FreeSurfer processing
>>>        "recon-all.v6.hires: command not found"
>>> To: Pubuditha Abeyasinghe <pabey...@uwo.ca>,
>>>        "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID: <d6bdbab8.1637e5%glass...@wustl.edu>
>>> Content-Type: text/plain; charset="us-ascii"
>>> 
>>> We are working on a response to your question.
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> 
>>> From: 
>>> <hcp-users-boun...@humanconnectome.org<mailto:hcp-users-bounces@humanconn
>>> ectome.org>> on behalf of Pubuditha Abeyasinghe
>>> <pabey...@uwo.ca<mailto:pabey...@uwo.ca>>
>>> Date: Thursday, March 1, 2018 at 2:09 PM
>>> To: 
>>> "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>"
>>> <hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>>
>>> Subject: [HCP-Users] Error in FreeSurfer processing
>>> "recon-all.v6.hires: command not found"
>>> 
>>> 
>>> Hi all,
>>> 
>>> I am very new to the HCP pipeline for preprocessing the data and I am
>>> trying to adopt to the pipeline.
>>> As the first step I am trying the execution of the pipeline with the
>>> example data that is given in the tutorial. The first part which is the
>>> PreFreeSurfer processing was completed successfully.
>>> 
>>> But I am having a problem with the second part, the FreeSurfer
>>> processing. When I start it, the process instantly comes to an end
>>> giving me the following error;
>>> 
>>> ~/Pipelines/FreeSurfer/FreeSurferPipeline.sh: line 41:
>>> recon-all.v6.hires: command not found
>>> 
>>> I double check the freesurfer installation and I source it as explained
>>> as well. Does this error has something to do with the installation? How
>>> can I fix it?
>>> 
>>> 
>>> Your help is much appreciated!
>>> 
>>> 
>>> Regards,
>>> Pubuditha
>>> 
>>> 
>>> [Western University]
>>> Pubuditha Abeyasinghe
>>> PhD Candidate
>>> Department of Physics and Astronomy
>>> Brain and Mind Institute
>>> Western University
>>> London, ON, Canada
>>> email: pabey...@uwo.ca<mailto:pabey...@uwo.ca>
>>> 
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /4fad907d/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 6
>>> Date: Thu, 1 Mar 2018 14:30:18 -0600
>>> From: "Timothy B. Brown" <tbbr...@wustl.edu>
>>> Subject: Re: [HCP-Users] Error in FreeSurfer processing
>>>        "recon-all.v6.hires: command not found"
>>> To: hcp-users@humanconnectome.org, pabey...@uwo.ca
>>> Message-ID: <c2e47123-5ab6-a82c-7d3e-4f9f187ec...@wustl.edu>
>>> Content-Type: text/plain; charset="windows-1252"
>>> 
>>> Hi Pubuditha,
>>> 
>>> 
>>> I'm assuming that you are using the latest release of the HCP Pipelines
>>> (v3.25.0). The changes that are causing you a problem are part of the
>>> ongoing conversion to FreeSurfer version 6. Those changes should not
>>> have made it into a release yet. I apologize for that mistake.
>>> 
>>> 
>>> Please try using version v3.24.0. I have briefly reviewed that version,
>>> and I believe that those FreeSurfer 6 related changes were not included
>>> in that release.
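>>> 
>>> If your Pipelines directory is a git clone (an assumption on my part),
>>> switching to that tag would look roughly like:
>>> 
>>>   cd ~/Pipelines          # wherever the HCP Pipelines checkout lives
>>>   git fetch --tags        # make sure the v3.24.0 tag is available locally
>>>   git checkout v3.24.0    # release without the FreeSurfer 6 changes
>>> 
>>> Otherwise, the v3.24.0 archive can be downloaded from the releases page
>>> on GitHub.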
>>> 
>>> 
>>> In the meantime, I will back those changes out of version v3.25.0 and
>>> create a new "bug-fix" release (v3.25.1).
>>> 
>>> 
>>> Thank you for pointing this problem out so that others (hopefully) do
>>> not have to encounter it also.
>>> 
>>> 
>>> Best Regards,
>>> 
>>> 
>>> -- Tim
>>> 
>>> 
>>> On 03/01/2018 02:12 PM, Glasser, Matthew wrote:
>>>> We are working on a response to your question.
>>>> 
>>>> Peace,
>>>> 
>>>> Matt.
>>>> 
>>>> From: <hcp-users-boun...@humanconnectome.org
>>>> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of Pubuditha
>>>> Abeyasinghe <pabey...@uwo.ca <mailto:pabey...@uwo.ca>>
>>>> Date: Thursday, March 1, 2018 at 2:09 PM
>>>> To: "hcp-users@humanconnectome.org
>>>> <mailto:hcp-users@humanconnectome.org>" <hcp-users@humanconnectome.org
>>>> <mailto:hcp-users@humanconnectome.org>>
>>>> Subject: [HCP-Users] Error in FreeSurfer processing
>>>> "recon-all.v6.hires: command not found"
>>>> 
>>>> Hi all,
>>>> 
>>>> I am very new to the HCP pipeline for preprocessing the data and I am
>>>> trying to adopt to the pipeline.
>>>> As the first step I am trying the execution of the pipeline with the
>>>> example data that is given in the tutorial. The first part which is
>>>> the PreFreeSurfer processing was completed successfully.
>>>> 
>>>> But I am having a problem with the second part, the FreeSurfer
>>>> processing. When I start it, the process instantly comes to an end
>>>> giving me the following error;
>>>> 
>>>> ~/Pipelines/FreeSurfer/FreeSurferPipeline.sh: line 41:
>>>> recon-all.v6.hires: command not found
>>>> 
>>>> I double check the freesurfer installation and I source it as
>>>> explained as well. Does this error has something to do with the
>>>> installation? How can I fix it?
>>>> 
>>>> 
>>>> Your help is much appreciated!
>>>> 
>>>> 
>>>> Regards,
>>>> Pubuditha
>>>> 
>>>> 
>>>> Western University
>>>> *Pubuditha Abeyasinghe*
>>>> PhD Candidate
>>>> Department of Physics and Astronomy
>>>> Brain and Mind Institute
>>>> Western University
>>>> London, ON, Canada
>>>> email: pabey...@uwo.ca <mailto:pabey...@uwo.ca>
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>> 
>>> --
>>> /Timothy B. Brown
>>> Business & Technology Application Analyst III
>>> Pipeline Developer (Connectome Coordination Facility)
>>> tbbrown(at)wustl.edu
>>> /
>>> ------------------------------------------------------------------------
>>> The material in this message is private and may contain Protected
>>> Healthcare Information (PHI). If you are not the intended recipient, be
>>> advised that any unauthorized use, disclosure, copying or the taking of
>>> any action in reliance on the contents of this information is strictly
>>> prohibited. If you have received this email in error, please immediately
>>> notify the sender via telephone or return mail.
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /abfce907/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 7
>>> Date: Thu, 1 Mar 2018 16:40:54 -0600
>>> From: Timothy Coalson <tsc...@mst.edu>
>>> Subject: Re: [HCP-Users] Best Approach for using old volumetric data
>>>        to pick parcels-of-interest
>>> To: "Stevens, Michael" <michael.stev...@hhchealth.org>
>>> Cc: "Erin W. E. Dickie" <erin.w.dic...@gmail.com>,
>>>        "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID:
>>> 
>>> <CAK_=tayd6uzegzd1qgpf6l238tsg4rqazqnh4t9qomt8yfx...@mail.gmail.com>
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> The command to use is wb_command -volume-to-surface-mapping, the main
>>> surface should generally be the midthickness.  Using the
>>> -ribbon-constrained method will give you small positive values on the
>>> surface when the cortical ribbon just grazes your ROI, so it may be the
>>> thing to use (use pial and white surfaces as outer and inner).  You
>>> could
>>> use -trilinear which is simpler, but it has more risk of false
>>> negatives.
>>> You may need to experiment, I don't think the effectiveness of this
>>> kind of
>>> salvage effort has been explored much.
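>>> 
>>> Roughly, for one hemisphere of one subject, that could look like the
>>> following (file names are placeholders for your ROI volume and that
>>> subject's MNI-space surfaces):
>>> 
>>>   wb_command -volume-to-surface-mapping roi_MNI152.nii.gz \
>>>       subject.L.midthickness.32k_fs_LR.surf.gii \
>>>       subject_roi.L.func.gii \
>>>       -ribbon-constrained subject.L.white.32k_fs_LR.surf.gii \
>>>                           subject.L.pial.32k_fs_LR.surf.gii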
>>> 
>>> Tim
>>> 
>>> 
>>> On Thu, Mar 1, 2018 at 1:22 PM, Stevens, Michael <
>>> michael.stev...@hhchealth.org> wrote:
>>> 
>>>> Hi Tim,
>>>> 
>>>> 
>>>> 
>>>> Thanks.  That's clear and sounds like a really reasonable approach.
>>>> 
>>>> 
>>>> 
>>>> Can you point me towards the exact files I'd need to reference and
>>> maybe
>>>> suggest which function calls I'll need to use to do the
>>> volume-to-surface
>>>> mapping you describe?  I'll whip up a quick script to loop through
>>> about
>>>> 120 datasets from this R01 project and let you know how well it works.
>>>> 
>>>> 
>>>> 
>>>> Mike
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> *From:* Timothy Coalson [mailto:tsc...@mst.edu]
>>>> *Sent:* Friday, February 23, 2018 6:49 PM
>>>> *To:* Glasser, Matthew
>>>> *Cc:* Stevens, Michael; Erin W. E. Dickie;
>>> hcp-users@humanconnectome.org
>>>> *Subject:* Re: [HCP-Users] Best Approach for using old volumetric
>>> data to
>>>> pick parcels-of-interest
>>>> 
>>>> 
>>>> 
>>>> This is an email from Outside HHC. USE CAUTION opening attachments or
>>>> links from unknown senders.
>>>> 
>>>> Surface-based methods may boost your statistical power enough (by
>>> better
>>>> alignment, exclusion of irrelevant tissue, and smoothing that doesn't
>>> cross
>>>> sulcal banks, if you decide you need smoothing) that you may not need
>>> to
>>>> rely as much on existing ROIs.  Parcel-based statistics have a lot of
>>>> power, because the multiple comparisons are orders of magnitude
>>> smaller,
>>>> spatially independent noise averages out, and the signal averages
>>>> together.  We believe that a lot of old data would benefit from
>>> reanalysis
>>>> using surfaces.
>>>> 
>>>> 
>>>> 
>>>> However, our paper is mainly focused on specificity and continuous
>>> data.
>>>> If you have a binary volume ROI and you only need a rough guess of it
>>> on
>>>> the surface, you can get approximate answers, in a way that should
>>> reduce
>>>> false negatives (and give more false positives) from the
>>> surface/volume
>>>> transition problems.  You can map the ROI to the anatomical MNI
>>> surfaces of
>>>> a group of subjects, and take the max across subjects.  Each
>>> individual may
>>>> miss the expected group ribbon location in any given location, but it
>>> is
>>>> very likely that every point in the expected group ribbon location
>>> will
>>>> overlap with at least one subject in the group.  If this isn't
>>> enough, you
>>>> can dilate the volume ROI a few mm first.
>>>> 
>>>> 
>>>> 
>>>> Tim
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Fri, Feb 23, 2018 at 11:18 AM, Glasser, Matthew
>>> <glass...@wustl.edu>
>>>> wrote:
>>>> 
>>>> Hi Mike,
>>>> 
>>>> 
>>>> 
>>>> We have a preprint out on this exact question and the conclusion is
>>> that
>>>> it is really hard to do this accurately for most brain regions:
>>>> 
>>>> 
>>>> 
>>>> https://www.biorxiv.org/content/early/2018/01/29/255620
>>>> 
>>>> 
>>>> 
>>>> Really the best idea is probably to go back and reanalyze the old data
>>>> without volume-based smoothing and aligned across surfaces.  Erin
>>> Dickie,
>>>> CCed is working on tools to make this a little easier, but still
>>> there are
>>>> issues like needing a field map to get accurate fMRI to structural
>>>> registration.  The good news is that one's statistical power should
>>> be much
>>>> better if brains are actually lined up, and using parcellated analyses
>>>> instead of smoothing offers further benefits.
>>>> 
>>>> 
>>>> 
>>>> Matt.
>>>> 
>>>> 
>>>> 
>>>> *From: *<hcp-users-boun...@humanconnectome.org> on behalf of "Stevens,
>>>> Michael" <michael.stev...@hhchealth.org>
>>>> *Date: *Friday, February 23, 2018 at 8:58 AM
>>>> *To: *"hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>>> *Subject: *[HCP-Users] Best Approach for using old volumetric data to
>>>> pick parcels-of-interest
>>>> 
>>>> 
>>>> 
>>>> Hi everyone,
>>>> 
>>>> 
>>>> 
>>>> There's been a lot posted here over the past year or two on the
>>> challenges
>>>> and limitations of going back-and-forth between volumetric space and
>>>> HCP-defined surface space, with solid arguments for moving to (and
>>> sticking
>>>> with) CIFTI-defined brainordinates.  Here, I'm asking a slightly
>>> different
>>>> question… The field has decades of research using volume-space fMRI
>>>> timeseries analyses that helps to define where to look in the brain
>>> to test
>>>> new hypotheses.  Has anyone got a well-thought-out approach for
>>> mapping
>>>> such volume-space ROIs to the parcels within the new HCP 180 atlas?
>>> I ask
>>>> because the specificity of the HCP atlas sometimes offers a half dozen
>>>> candidate parcels for hypothesis-testing for what we previously
>>> thought of
>>>> as just one or two regions.  Even though our group currently has a
>>> half
>>>> dozen newer NIH-funded studies that use HCP compliant sequences, most
>>> of
>>>> that work is still predicated on a "region-of-interest" approach
>>> because
>>>> the study group sizes are less than a hundred, not in the thousands
>>>> typical of the HCP grantwork.  So we still have to contend with the
>>>> statistical power limitations inherent in any ROI approach.  It would
>>> be
>>>> great to be able to use our prior volume-space data to have greater
>>>> confidence in selecting among the various parcel-of-interest
>>> candidates
>>>> when testing hypotheses.
>>>> 
>>>> 
>>>> 
>>>> I'm wondering if anyone's yet worked out a step-by-step approach for a
>>>> series of warps/surface-maps/transformations that can take ROIs from
>>> MNI
>>>> space and give a "best guess" as to which HCP 180 atlas parcel(s)
>>> should be
>>>> queried in such instances.  It would be a nice bridge from older work
>>> to
>>>> newer HCP-guided work, that would allow researchers to circumvent the
>>> added
>>>> burden of having to go back and collect new pilot data using HCP
>>>> sequences.  A thoughtful list of the analytic or conceptual pros/cons
>>> of
>>>> something like this would be helpful as well.
>>>> 
>>>> 
>>>> 
>>>> Thanks,
>>>> 
>>>> Mike
>>>> 
>>>> 
>>>> 
>>>> 
>>>> *This e-mail message, including any attachments, is for the sole use
>>> of
>>>> the intended recipient(s) and may contain confidential and privileged
>>>> information. Any unauthorized review, use, disclosure, or
>>> distribution is
>>>> prohibited. If you are not the intended recipient, or an employee or
>>> agent
>>>> responsible for delivering the message to the intended recipient,
>>> please
>>>> contact the sender by reply e-mail and destroy all copies of the
>>> original
>>>> message, including any attachments. *
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> *Reminder: This e-mail and any attachments are subject to the current
>>> HHC
>>>> email retention policies. Please save or store appropriately in
>>> accordance
>>>> with policy. *
>>>> 
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /b0aca16d/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 8
>>> Date: Thu, 1 Mar 2018 17:08:39 -0600
>>> From: Timothy Coalson <tsc...@mst.edu>
>>> Subject: Re: [HCP-Users] ROI cluster centers to surface grayordinates
>>> To: Manasij Venkatesh <mana...@umd.edu>
>>> Cc: hcp-users@humanconnectome.org
>>> Message-ID:
>>> 
>>> <CAK_=tawZFTcB-L2KUw1A--=ouz-+pbuq5zd4wdbwl-3d5hg...@mail.gmail.com>
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> Unfortunately, it is worse than that - even ignoring the individual
>>> variability issue (which should not be ignored), a small change in MNI
>>> coordinate can jump from one bank of a sulcus to the other, so having
>>> only
>>> the center coordinate of a cluster makes this a badly posed problem (and
>>> thus explains why peak or center of gravity MNI coordinate reporting is
>>> not
>>> nearly as useful as actual data files).
>>> 
>>> Coordinates of each vertex are contained in .surf.gii files, but
>>> different
>>> files can represent different depths of cortex (gray/white boundary
>>> ("white"), csf/gray boundary ("pial"), halfway between ("midthickness),
>>> etc), or other things entirely (inflated, sphere).  What you could do to
>>> get an idea of the problem, is to take a group of subjects, get the 
>>> vertex
>>> coordinates of the midthickness surface, and compute the euclidean 
>>> distance
>>> to each of your cluster centers.  If you average these across subjects, 
>>> I
>>> suspect you will generally end up with 2 similarly low-distance spots 
>>> for
>>> each cluster center, on opposite sides of a sulcus (for consistent 
>>> sulci,
>>> anyway - if it is in or near a high-folding-variability region, one of 
>>> the
>>> two low spots may get blurred out of existence (or the two may be 
>>> blurred
>>> into one larger spot) because the folding patterns don't align, but it 
>>> is
>>> the folding patterns that determine where the MNI coordinate is close to
>>> cortex).  To make it easier to see this issue, you could transform these
>>> distances into a gaussian kernel (matching the size of the original 
>>> cluster
>>> if you know it, or the amount of smoothing that was used, or...) and 
>>> then
>>> average that across subjects.
>>> 
>>> For a less-rigorous answer, there is a -surface-closest-vertex command 
>>> in
>>> wb_command that will simply find the closest vertex on the given surface
>>> (you will need to separate your right and left coordinates), but as this
>>> demonstration should show, MNI coordinate data is generally ambiguous to
>>> begin with.  Also note that group-average surfaces are missing a lot of
>>> folding detail that individual subjects have, and while that may
>>> incidentally make "closest vertex" more stable, it doesn't imply that it
>>> makes the answer more correct.
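>>> 
>>> For the less-rigorous route, a minimal sketch (the surface file name is
>>> just an example of an individual or group-average midthickness surface):
>>> 
>>>   # one MNI coordinate per line: x y z (left-hemisphere points only here)
>>>   printf '%s\n' "-42 -58 48" "-36 22 -4" > cluster_centers_L.txt
>>>   wb_command -surface-closest-vertex \
>>>       subject.L.midthickness.32k_fs_LR.surf.gii \
>>>       cluster_centers_L.txt \
>>>       closest_vertices_L.txt    # one vertex index per input coordinate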
>>> 
>>> Tim
>>> 
>>> 
>>> On Thu, Mar 1, 2018 at 9:32 AM, Manasij Venkatesh <mana...@umd.edu> 
>>> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I have a set of ROI cluster center coordinates in MNI space. My goal 
>>> is to
>>>> create similar ROI clusters to use with HCP data. I understand 
>>> there's no
>>>> true equivalent in terms of surface grayordinates but what would be 
>>> the
>>>> best way to find their approximate position on the surface? Is this 
>>> the
>>>> information contained in the surf.gii files? Please let me know.
>>>> 
>>>> Sincerely,
>>>> Manasij
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /125e4d25/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 9
>>> Date: Thu, 1 Mar 2018 17:02:04 -0500
>>> From: "Erin W. E. Dickie" <erin.w.dic...@gmail.com>
>>> Subject: Re: [HCP-Users] Best Approach for using old volumetric data
>>>        to pick parcels-of-interest
>>> To: "Stevens, Michael" <michael.stev...@hhchealth.org>
>>> Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Message-ID:
>>> 
>>> <CADig-n81fqn6z1cc_jnE6mizX++ViKHrbmsx=iwtnweywed...@mail.gmail.com>
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> Hey Mike,
>>> 
>>> Here's the link to the tools Matt was referring to in the last email.
>>> https://edickie.github.io/ciftify
>>> 
>>> I think I'm still a little confused about your exact problem. What do 
>>> you
>>> need to map? Do you already have surfaces for all your participants in
>>> GIFTI format?
>>> 
>>> Thanks,
>>> Erin
>>> 
>>> On Thu, Mar 1, 2018 at 2:22 PM, Stevens, Michael <
>>> michael.stev...@hhchealth.org> wrote:
>>> 
>>>> Hi Tim,
>>>> 
>>>> 
>>>> 
>>>> Thanks.  That's clear and sounds like a really reasonable approach.
>>>> 
>>>> 
>>>> 
>>>> Can you point me towards the exact files I'd need to reference and
>>> maybe
>>>> suggest which function calls I'll need to use to do the
>>> volume-to-surface
>>>> mapping you describe?  I'll whip up a quick script to loop through
>>> about
>>>> 120 datasets from this R01 project and let you know how well it works.
>>>> 
>>>> 
>>>> 
>>>> Mike
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> *From:* Timothy Coalson [mailto:tsc...@mst.edu]
>>>> *Sent:* Friday, February 23, 2018 6:49 PM
>>>> *To:* Glasser, Matthew
>>>> *Cc:* Stevens, Michael; Erin W. E. Dickie; 
>>> hcp-users@humanconnectome.org
>>>> *Subject:* Re: [HCP-Users] Best Approach for using old volumetric 
>>> data to
>>>> pick parcels-of-interest
>>>> 
>>>> 
>>>> 
>>>> This is an email from Outside HHC. USE CAUTION opening attachments or
>>>> links from unknown senders.
>>>> 
>>>> Surface-based methods may boost your statistical power enough (by 
>>> better
>>>> alignment, exclusion of irrelevant tissue, and smoothing that doesn't 
>>> cross
>>>> sulcal banks, if you decide you need smoothing) that you may not need 
>>> to
>>>> rely as much on existing ROIs.  Parcel-based statistics have a lot of
>>>> power, because the multiple comparisons are orders of magnitude 
>>> smaller,
>>>> spatially independent noise averages out, and the signal averages
>>>> together.  We believe that a lot of old data would benefit from 
>>> reanalysis
>>>> using surfaces.
>>>> 
>>>> 
>>>> 
>>>> However, our paper is mainly focused on specificity and continuous 
>>> data.
>>>> If you have a binary volume ROI and you only need a rough guess of it 
>>> on
>>>> the surface, you can get approximate answers, in a way that should 
>>> reduce
>>>> false negatives (and give more false positives) from the 
>>> surface/volume
>>>> transition problems.  You can map the ROI to the anatomical MNI 
>>> surfaces of
>>>> a group of subjects, and take the max across subjects.  Each 
>>> individual may
>>>> miss the expected group ribbon location in any given location, but it 
>>> is
>>>> very likely that every point in the expected group ribbon location 
>>> will
>>>> overlap with at least one subject in the group.  If this isn't 
>>> enough, you
>>>> can dilate the volume ROI a few mm first.
>>>> 
>>>> 
>>>> 
>>>> Tim
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Fri, Feb 23, 2018 at 11:18 AM, Glasser, Matthew 
>>> <glass...@wustl.edu>
>>>> wrote:
>>>> 
>>>> Hi Mike,
>>>> 
>>>> 
>>>> 
>>>> We have a preprint out on this exact question and the conclusion is 
>>> that
>>>> it is really hard to do this accurately for most brain regions:
>>>> 
>>>> 
>>>> 
>>>> https://www.biorxiv.org/content/early/2018/01/29/255620
>>>> 
>>>> 
>>>> 
>>>> Really the best idea is probably to go back and reanalyze the old data
>>>> without volume-based smoothing and aligned across surfaces.  Erin 
>>> Dickie,
>>>> CCed is working on tools to make this a little easier, but still 
>>> there are
>>>> issues like needing a field map to get accurate fMRI to structural
>>>> registration.  The good news is that one's statistical power should
>>> be much
>>>> better if brains are actually lined up, and using parcellated analyses
>>>> instead of smoothing offers further benefits.
>>>> 
>>>> 
>>>> 
>>>> Matt.
>>>> 
>>>> 
>>>> 
>>>> *From: *<hcp-users-boun...@humanconnectome.org> on behalf of "Stevens,
>>>> Michael" <michael.stev...@hhchealth.org>
>>>> *Date: *Friday, February 23, 2018 at 8:58 AM
>>>> *To: *"hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>>> *Subject: *[HCP-Users] Best Approach for using old volumetric data to
>>>> pick parcels-of-interest
>>>> 
>>>> 
>>>> 
>>>> Hi everyone,
>>>> 
>>>> 
>>>> 
>>>> There's been a lot posted here over the past year or two on the
>>> challenges
>>>> and limitations of going back-and-forth between volumetric space and
>>>> HCP-defined surface space, with solid arguments for moving to (and 
>>> sticking
>>>> with) CIFTI-defined brainordinates.  Here, I'm asking a slightly
>>> different
>>>> question… The field has decades of research using volume-space fMRI
>>>> timeseries analyses that helps to define where to look in the brain 
>>> to test
>>>> new hypotheses.  Has anyone got a well-thought-out approach for 
>>> mapping
>>>> such volume-space ROIs to the parcels within the new HCP 180 atlas?  
>>> I ask
>>>> because the specificity of the HCP atlas sometimes offers a half dozen
>>>> candidate parcels for hypothesis-testing for what we previously 
>>> thought of
>>>> as just one or two regions.  Even though our group currently has a 
>>> half
>>>> dozen newer NIH-funded studies that use HCP compliant sequences, most 
>>> of
>>>> that work is still predicated on a "region-of-interest" approach
>>> because
>>>> the study group sizes are less than a hundred, not in the thousands
>>>> typical of the HCP grantwork.  So we still have to contend with the
>>>> statistical power limitations inherent in any ROI approach.  It would 
>>> be
>>>> great to be able to use our prior volume-space data to have greater
>>>> confidence in selecting among the various parcel-of-interest 
>>> candidates
>>>> when testing hypotheses.
>>>> 
>>>> 
>>>> 
>>>> I'm wondering if anyone's yet worked out a step-by-step approach for a
>>>> series of warps/surface-maps/transformations that can take ROIs from 
>>> MNI
>>>> space and give a "best guess" as to which HCP 180 atlas parcel(s)
>>> should be
>>>> queried in such instances.  It would be a nice bridge from older work 
>>> to
>>>> newer HCP-guided work, that would allow researchers to circumvent the 
>>> added
>>>> burden of having to go back and collect new pilot data using HCP
>>>> sequences.  A thoughtful list of the analytic or conceptual pros/cons 
>>> of
>>>> something like this would be helpful as well.
>>>> 
>>>> 
>>>> 
>>>> Thanks,
>>>> 
>>>> Mike
>>>> 
>>>> 
>>>> 
>>>> 
>>>> *This e-mail message, including any attachments, is for the sole use 
>>> of
>>>> the intended recipient(s) and may contain confidential and privileged
>>>> information. Any unauthorized review, use, disclosure, or 
>>> distribution is
>>>> prohibited. If you are not the intended recipient, or an employee or 
>>> agent
>>>> responsible for delivering the message to the intended recipient, 
>>> please
>>>> contact the sender by reply e-mail and destroy all copies of the 
>>> original
>>>> message, including any attachments. *
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> *Reminder: This e-mail and any attachments are subject to the current 
>>> HHC
>>>> email retention policies. Please save or store appropriately in 
>>> accordance
>>>> with policy. *
>>>> 
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180301
>>> /386b7e49/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 10
>>> Date: Fri, 2 Mar 2018 09:11:39 +0100
>>> From: Robert Becker <em...@robertbecker.info>
>>> Subject: Re: [HCP-Users] Correct interpretation of NIH battery test
>>>        'Words-in-Noise' in HCP subjects
>>> To: "Elam, Jennifer" <e...@wustl.edu>,  "hcp-users@humanconnectome.org"
>>>        <hcp-users@humanconnectome.org>
>>> Message-ID: <1bebeeb6-ff94-2473-d78a-b7860b5a0...@robertbecker.info>
>>> Content-Type: text/plain; charset="windows-1252"
>>> 
>>> Hi, Jennifer,
>>> 
>>> That's great, thanks for the clarification!
>>> 
>>> 
>>> Best,
>>> Robert
>>> 
>>> Am 01/03/2018 um 18:26 schrieb Elam, Jennifer:
>>>> 
>>>> Hi Robert,
>>>> 
>>>> Thank you for pointing us to this problem. The HCP Words in Noise
>>>> score is indeed the NIH Toolbox Words in Noise (WIN) Test computed
>>>> score, rather than the Toolbox Hearing Threshold Test that was
>>>> erroneously used for the description in the Data Dictionary. We will
>>>> fix the data dictionary with the upcoming data release of the
>>>> corrected 7T fMRI data slated to occur within the month.
>>>> 
>>>> 
>>>> Here's the full description for the Words in Noise measure from the
>>>> NIH Toolbox Interpretation Guide from the 2012 version we used for 
>>> HCP:
>>>> 
>>>> 
>>>> NIH Toolbox Words-in-Noise Test (WIN)
>>>> Description:
>>>> This test measures a person's ability to recognize single words
>>>> presented amid varying levels of background noise. It measures how
>>>> much difficulty a person might have hearing in a noisy environment. A
>>>> recorded voice instructs the participant to listen to and then repeat
>>>> words. The task becomes increasingly difficult as the background noise
>>>> gets louder, thus reducing the signal-to-noise ratio. The test is
>>>> recommended for participants ages 6-85 and takes approximately six
>>>> minutes to administer.
>>>> Scoring Process: The examiner scores the participant's responses as
>>>> correct or incorrect, and a total raw score (out of a maximum of 35
>>>> points) is calculated by the software for each ear. A percent correct
>>>> is calculated, which is then translated into a threshold score for
>>>> each ear, in decibels of signal-to-noise ratio (dB S/N), using a
>>>> look-up table (see Appendix C). Alternatively, the following equation
>>>> can be used to calculate the S/N score based on the raw score, in lieu
>>>> of the look-up table. For each ear: WIN_Score = 26 - 0.8 * WIN_NCorrect.
>>>> Thus, the best score that can be attained (35 correct) for either ear
>>>> is -2.0 dB S/N, and the worst score (0 correct) is 26.0 dB S/N. Lower
>>>> scores, therefore, are indicative of better performance on this test.
>>>> In the Toolbox Assessment Scores output file, the score for the better
>>>> ear is provided in the Computed Score column.
>>>> Interpretation: Assessment of the ability to understand speech in a
>>>> noisy background yields an ecologically valid measure of hearing
>>>> because a substantial portion of communication in the real world
>>>> occurs in less-than-ideal environments. Moreover, speech perception in
>>>> noise is often difficult to predict from pure-tone thresholds or from
>>>> speech perception in quiet settings. The NIH Toolbox version of the
>>>> Words-in-Noise Test is newly released, so the interpretive guidelines
>>>> provided are preliminary and may need further adjustment as future
>>>> studies are conducted. As noted above, the range of possible scores for
>>>> each ear is -2.0 to 26.0 dB S/N, with lower scores indicative of
>>>> better performance and, conversely, higher scores potentially
>>>> suggestive of hearing difficulties. For score interpretation with ages
>>>> 13 and above, a cutoff of 10 dB S/N is recommended for the Toolbox
>>>> version of this measure. Participants with a score higher than this
>>>> cutoff should follow up with a hearing professional, specifically an
>>>> otolaryngologist, who would then refer to an audiologist as needed.
>>>> Users should note that the cutoff suggested here is slightly higher
>>>> than other published versions of this test because other versions were
>>>> conducted in quieter environments.
>>>> 
>>>> Again, sorry for the oversight. Let me know if you have further 
>>> questions.
>>>> 
>>>> Best,
>>>> Jenn
>>>> 
>>>> 
>>>> Jennifer Elam, Ph.D.
>>>> Scientific Outreach, Human Connectome Project
>>>> Washington University School of Medicine
>>>> Department of Neuroscience, Box 8108
>>>> 660 South Euclid Avenue
>>>> St. Louis, MO 63110
>>>> 314-362-9387<tel:314-362-9387>
>>>> e...@wustl.edu<mailto:e...@wustl.edu>
>>>> www.humanconnectome.org<http://www.humanconnectome.org/>
>>>> 
>>>> 
>>>> 
>>> ------------------------------------------------------------------------
>>>> *From:* hcp-users-boun...@humanconnectome.org
>>>> <hcp-users-boun...@humanconnectome.org> on behalf of Robert Becker
>>>> <em...@robertbecker.info>
>>>> *Sent:* Thursday, March 1, 2018 8:41:06 AM
>>>> *To:* hcp-users@humanconnectome.org
>>>> *Subject:* [HCP-Users] Correct interpretation of NIH battery test
>>>> 'Words-in-Noise' in HCP subjects
>>>> 
>>>> Dear all,
>>>> 
>>>> we have trouble understanding what the above test actually tests in
>>>> the context of HCP data. Despite its suggestive name, this test is
>>>> described (in the updated HCP Data Dictionary and its previous
>>>> version) as a pure-tone thresholding test that seems to have nothing
>>>> to do with understanding words embedded in noise or any similar 
>>> scenario.
>>>> 
>>>> The description in the Data Dictionary is pretty clear and excludes
>>>> any such interpretation; it is just that the naming seems confusing,
>>>> and also there actually is an NIH Toolbox test called
>>>> "Words-in-Noise" that does test how subjects comprehend one-syllable
>>>> words.
>>>> 
>>>> 
>>>> Can anyone comment on the exact nature of this test and help us out?
>>>> 
>>>> Thanks for your help!
>>>> 
>>>> Robert
>>>> --
>>>> Robert Becker, PhD
>>>> Universität Zürich
>>>> Psychologisches Institut
>>>> Binzmühlestrasse 14
>>>> 8050 Zürich
>>>> 
>>>> Tel: +41 44 63 57234
>>>> em...@robertbecker.info <mailto:em...@robertbecker.info>
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>> 
>>> 
>>> --
>>> Robert Becker, PhD
>>> Universität Zürich
>>> Psychologisches Institut
>>> Binzmühlestrasse 14
>>> 8050 Zürich
>>> 
>>> Tel: +41 44 63 57234
>>> em...@robertbecker.info
>>> 
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> http://lists.humanconnectome.org/pipermail/hcp-users/attachments/20180302
>>> /ad320ef0/attachment-0001.html
>>> 
>>> ------------------------------
>>> 
>>> Message: 11
>>> Date: Fri, 2 Mar 2018 13:47:12 +0100
>>> From: A R <aruls...@gmail.com>
>>> Subject: [HCP-Users] Split dtseries
>>> To: hcp-users@humanconnectome.org
>>> Message-ID: <4e282581-45a1-47b5-a40d-a7cdf679a...@gmail.com>
>>> Content-Type: text/plain;       charset=us-ascii
>>> 
>>> Dear HCP users,
>>> 
>>> How to split/trim a dtseries or ptseries file by number of time points? 
>>> Basically an equivalent of fslroi...
>>> 
>>> Thanks for any info
>>> 
>>> 
>>> ------------------------------
>>> 
>>> Message: 12
>>> Date: Fri, 2 Mar 2018 14:39:40 +0000
>>> From: "Glasser, Matthew" <glass...@wustl.edu>
>>> Subject: Re: [HCP-Users] Split dtseries
>>> To: A R <aruls...@gmail.com>, "hcp-users@humanconnectome.org"
>>>        <hcp-users@humanconnectome.org>
>>> Message-ID: <d6bebe37.163961%glass...@wustl.edu>
>>> Content-Type: text/plain; charset="us-ascii"
>>> 
>>> wb_command -cifti-merge contains this functionality, rather than a
>>> separate command.
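>>> 
>>> For example, keeping the first 100 time points of a dense time series (a
>>> sketch, assuming the -column/-up-to sub-options; check
>>> wb_command -cifti-merge for exact usage):
>>> 
>>>   wb_command -cifti-merge first100.dtseries.nii \
>>>       -cifti input.dtseries.nii -column 1 -up-to 100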
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> 
>>> On 3/2/18, 6:47 AM, "hcp-users-boun...@humanconnectome.org on behalf of 
>>> A
>>> R" <hcp-users-boun...@humanconnectome.org on behalf of 
>>> aruls...@gmail.com>
>>> wrote:
>>> 
>>>> Dear HCP users,
>>>> 
>>>> How to split/trim a dtseries or ptseries file by number of time points?
>>>> Basically an equivalent of fslroi...
>>>> 
>>>> Thanks for any info
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> 
>>> 
>>> 
>>> 
>>> ------------------------------
>>> 
>>> Message: 13
>>> Date: Fri, 2 Mar 2018 16:13:16 +0000
>>> From: Kwan-Jin Jung <kjj...@umass.edu>
>>> Subject: [HCP-Users] Change of FreeSurferHiresPial.sh for MacBook IOS
>>> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> Cc: Hae-Min Jung <hae-min.j...@austenriggs.net>
>>> Message-ID: <df573886-ad6b-437e-9499-6a9c8283e...@umass.edu>
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> Hi,
>>> 
>>> After I failed to run the FreeSurfer pipeline on both Ubuntu and CentOS, I
>>> tried it on a MacBook.
>>> 
>>> It runs without library issues, but it seems to have differences in 
>>> shell scripting commands.
>>> 
>>> The first example is "cp", which needs to be replaced with "rsync" in
>>> both FreeSurferHiresWhite.sh and FreeSurferHiresPial.sh.
>>> 
>>> Then there was another issue in FreeSurferHiresPial.sh at the line #92 
>>> of:
>>> "${CARET7DIR}/wb_command -set-structure "$surfdir"/lh.white.surf.gii 
>>> CORTEX_LEFT"
>>> 
>>> I converted this into:
>>> "open -a ${CARET7DIR}/wb_command -set-structure 
>>> "$surfdir"/lh.white.surf.gii CORTEX_LEFT"
>>> 
>>> However, I am getting an error of:
>>> "The file /Users/kjjung/MRI/HCP/Practice/CORTEX_LEFT does not exist."
>>> 
>>> Does anyone know how to fix this issue?
>>> 
>>> My MacBook is macOS High Sierra, v10.13.3.
>>> 
>>> Kwan-Jin Jung
>>> UMASS Amherst
>>> 
>>> 
>>> 
>>> ------------------------------
>>> 
>>> Message: 14
>>> Date: Fri, 2 Mar 2018 16:16:02 +0000
>>> From: "Glasser, Matthew" <glass...@wustl.edu>
>>> Subject: Re: [HCP-Users] Change of FreeSurferHiresPial.sh for MacBook
>>>        IOS
>>> To: Kwan-Jin Jung <kjj...@umass.edu>, "hcp-users@humanconnectome.org"
>>>        <hcp-users@humanconnectome.org>
>>> Cc: Hae-Min Jung <hae-min.j...@austenriggs.net>
>>> Message-ID: <d6bed4a4.1639e4%glass...@wustl.edu>
>>> Content-Type: text/plain; charset="Windows-1252"
>>> 
>>> I'm not sure we support the HCP Pipelines on Mac OS X.  You definitely
>>> need to be using a bash interpreter.
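>>> 
>>> As an aside, wb_command is an ordinary command-line binary, so in a bash
>>> script it is called directly rather than through "open -a". A sketch,
>>> assuming CARET7DIR points at the directory containing the wb_command
>>> binary from the mac64 Workbench download:
>>> 
>>>   export CARET7DIR=/Applications/workbench/bin_macosx64   # assumed path
>>>   "${CARET7DIR}"/wb_command -set-structure \
>>>       "$surfdir"/lh.white.surf.gii CORTEX_LEFT
>>> 
>>> i.e., the original line from FreeSurferHiresPial.sh, unmodified.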
>>> 
>>> What issues were you having on Ubuntu?
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> 
>>> On 3/2/18, 10:13 AM, "hcp-users-boun...@humanconnectome.org on behalf of
>>> Kwan-Jin Jung" <hcp-users-boun...@humanconnectome.org on behalf of
>>> kjj...@umass.edu> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> After I failed to run FreeSurfer pipeline on both Ubuntu and Centos, I
>>>> tried it on MacBook.
>>>> 
>>>> It runs without library issues, but it seems to have differences in 
>>> shell
>>>> scripting commands.
>>>> 
>>>> The first example is "cp", which needs to be replaced with "rsync" in
>>> both
>>>> FreeSurferHiresWhite.sh and FreeSurferHiresPial.sh.
>>>> 
>>>> Then there was another issue in FreeSurferHiresPial.sh at the line #92 
>>> of:
>>>> "${CARET7DIR}/wb_command -set-structure "$surfdir"/lh.white.surf.gii
>>>> CORTEX_LEFT"
>>>> 
>>>> I converted this into:
>>>> "open -a ${CARET7DIR}/wb_command -set-structure
>>>> "$surfdir"/lh.white.surf.gii CORTEX_LEFT"
>>>> 
>>>> However, I am getting an error of:
>>>> "The file /Users/kjjung/MRI/HCP/Practice/CORTEX_LEFT does not exist."
>>>> 
>>>> Does anyone know how to fix this issue?
>>>> 
>>>> My MacBook is macOS High Sierra, v10.13.3.
>>>> 
>>>> Kwan-Jin Jung
>>>> UMASS Amherst
>>>> 
>>>> _______________________________________________
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> 
>>> 
>>> 
>>> 
>>> ------------------------------
>>> 
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> 
>>> 
>>> End of HCP-Users Digest, Vol 64, Issue 2
>>> ****************************************
>>> 
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>> _______________________________________________
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>> 
> 


_______________________________________________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
