Re: [HCP-Users] mris_make_surfaces missing shared library

2014-10-23 Thread Timothy B. Brown

Hi Joe,

The mris_make_surfaces binary in the HCP custom version of FreeSurfer
(5.3.0-HCP) was linked using shared libraries. (My understanding is that
the mris_make_surfaces binary in other releases of FreeSurfer is linked
to static libraries so that the library code is included in the
resulting binary and thus the library does not need to be installed on
your system or located at run-time.) The mris_make_surfaces binary is
looking for a specific version (version 6) of a shared library that it
needs: the NetCDF (Network Common Data Form) library. You may have an
older version of the NetCDF shared library installed on your system
(e.g. libnetcdf.so.4), or possibly a newer one (e.g. libnetcdf.so.7).
Many binaries that use the NetCDF shared library work fine because they
are linked in such a way as to use the latest version you have
installed. But in this case, mris_make_surfaces is trying to locate one
very specific version of the library.


You (or your friendly neighborhood systems administrator) will need to
make sure libnetcdf.so.6 is installed in a standard library directory or
in a directory referenced by the LD_LIBRARY_PATH environment variable.
It would be preferable to install it in one of the standard library
locations instead of using LD_LIBRARY_PATH to locate it. The standard
locations may depend on your exact platform (Ubuntu, Red Hat, CentOS,
etc.) but generally are:

 * /lib
 * /lib64
 * /usr/lib
 * /usr/lib64
 * /usr/local/lib
 * /usr/local/lib64

Unless you really, really know what you're doing, installing a library
in /lib or /lib64 is not usually a good idea. I think you'll find that
/usr/lib64 is probably your best bet. Also, you will probably actually
end up having libnetcdf.so.6.0.0 installed and have libnetcdf.so.6 be a
symbolic link to the libnetcdf.so.6.0.0 file.
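For example (a rough sketch only; the exact paths, and the package that
actually provides libnetcdf.so.6.0.0, vary by distribution, so treat the
file locations below as placeholders):

# See which NetCDF shared library versions the dynamic linker already knows about
ldconfig -p | grep libnetcdf

# If libnetcdf.so.6.0.0 is installed (say in /usr/lib64) but the version-6
# name is missing, create the symbolic link and refresh the linker cache
sudo ln -s /usr/lib64/libnetcdf.so.6.0.0 /usr/lib64/libnetcdf.so.6
sudo ldconfig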

If you have problems finding libnetcdf.so.6 for your platform, please
email back, and I'll see if our system administrators can help you
locate the appropriate file and get it installed.

As for the impact on your results of using release 5.3.0 instead of
5.3.0-HCP: changes to the mris_make_surfaces program were necessary to
better handle the HCP data. It was these changes to the
mris_make_surfaces program that actually prompted the creation of the
5.3.0-HCP version of FreeSurfer. These changes involved correctly
locating the pial surface. There are others who would be much better
able than I would to explain the differences in pial surface location
between FreeSurfer 5.3.0 and FreeSurfer 5.3.0-HCP. If you would like
that better explanation, I suggest you re-post your question to this
hcp-users@humanconnectome.org list as a question about pial surface
location.

Hope that's helpful,

Tim


On Thu, Oct 23, 2014, at 11:06, Joseph Orr wrote:
> Hi, I’m getting the below missing shared libraries error from
> mris_make_surfaces with Freesurfer-5.3.0-HCP. Is it possible that I am
> missing a required package/ file? I was able to run
> FreeSurferPipeline.sh without error using the release version of
> FS-5.3.0. What are the impacts on my results using release 5.3.0 and
> the HCP version? I am running the pipeline on my own data from a
> functional study.
>
> Thanks, Joe
>
>> mris_make_surfaces -noaparc -mgz -T1 brain.finalsurfs 2011 lh
>>
>> mris_make_surfaces: error while loading shared libraries:
>> libnetcdf.so.6: cannot open shared object file: No such file or
>> directory
>
>
> --
> Joseph M. Orr, PhD. Postdoctoral Fellow Institute of Cognitive Science
> 594 UCB - CINC 182D University of Colorado at Boulder Boulder, CO
> 80309-0594
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] error when processing more than 2 tasks with GenericfMRIVolume script

2014-12-10 Thread Timothy B. Brown


On 09 Dec 2014 at 12:29, "Book, Gregory"
 wrote:
>
> We’ve run into a weird problem when using the GenericfMRIVolume batch
> script to process more than 2 tasks at a time. When we setup any more
> than 2 tasks (we’ve tried 3 and 4 tasks), the first 2 run correctly
> and without error, but any after that crash in the TopUp section with
> the following error (the failed tasks all get to the same point): ...

On 09 Dec 2014 at 15:11, "Xu, Junqian"  wrote:
>
> After a second thought, maybe convert the list to array specifically
> for the comparison purpose so the rest of the pipeline script don't
> need to be modified to use array indexing. 
> ...

Gordon and Gregory,

Comments indicating that the Tasklist and PhaseEncodinglist should have
the same number of space-delimited elements, along with a test that
verifies this before the main processing loop starts, have been added to
the GenericfMRIVolumeProcessingPipelineBatch.sh script in the example
scripts directory.
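The check itself is only a few lines. Roughly (a sketch rather than the
exact code that was committed, using the Tasklist and PhaseEncodinglist
variable names from the batch script):

# Convert each space-delimited list to an array and compare element counts
Tasklist_array=(${Tasklist})
PhaseEncodinglist_array=(${PhaseEncodinglist})

if [ ${#Tasklist_array[@]} -ne ${#PhaseEncodinglist_array[@]} ] ; then
    echo "Tasklist and PhaseEncodinglist must have the same number of elements" 1>&2
    exit 1
fi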

This change will be included in the next released version of the HCP
Pipeline Scripts.

Gregory, thank you for pointing out the issue. Gordon, thank you for the
suggested changes.

Tim
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] mris_make_surfaces: undefined symbol ncerr in FreesurferPipelineBatch.sh

2015-01-29 Thread Timothy B. Brown
> From: David Hofmann
> Date: Mon, 26 Jan 2015 11:35
>
> Hi @all,
>
> I downloaded the node timeseries for the HCP subjects. There are
> different amounts of "ICA-ROIs", that is 25, 50, 100, 200, 300.
>
> I have to make a trade-off between the amount of computation time, the
> number of persons to include, and what might be generally "preferred"
> or accepted by reviewers.
>
> For example, I thought of either using the 25 ROIs and analysing about
> 50 people or using the 100 ROIs and analysing 10 people. Both would
> require approximately the same amount of computation time.
>
> I personally prefer to use the 100 ROIs, but on the other hand 25 ROIs
> are easier to handle computationally. But then I'm wondering if 25
> ROIs is "enough".
>
> I know there is no general answer, but I just wanted to hear some
> opinions on that from you. In other words, which number of ROIs and
> people (25/50 or 100/10) would you choose and why?
>
> Thanks in advance
>
> David
>
> --
>
> Message: 3
> Date: Mon, 26 Jan 2015 11:41:13
> From: Stephen Smith
> Subject: Re: [HCP-Users] Node timeseries - How many ROI's to use?
> To: David Hofmann
> Cc: hcp-users
>
> Hi - separate from your main question, in general I have a "better
> feeling" about netmat analysis from the 50 and 200 node datasets than
> the other options. Please don't ask me to defend that (or be correct
> about it!) - because it's just a gut feeling from having played with
> the datasets a lot.
>
> Wrt your main question - if your final analysis involves any
> cross-subject modelling, then the analysis is *likely* to be most
> limited by subject variability - and as a result probably the single
> most important thing to maximise is subject numbers.
>
> More generally, I would say you should aim to use all the data
> (subjects), even if you have to wait a bit longer for the results!
>
> Cheers, Steve.
>
> ---
> Stephen M. Smith, Professor of Biomedical Engineering
> Associate Director, Oxford University FMRIB Centre
>
> FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
> +44 (0) 1865 222726 (fax 222717)
> st...@fmrib.ox.ac.uk  http://www.fmrib.ox.ac.uk/~steve
> ---
>
> Stop the cultural destruction of Tibet <http://smithinks.net/>

Re: [HCP-Users] Resolving an issue

2015-01-29 Thread Timothy B. Brown

Please see my response of just a few moments ago to
dkumar.in...@gmail.com on this same mailing list. It appears that you
two are having the same issue.

Best Regards, Tim

On Thu, Jan 29, 2015, at 09:39, Allwyn cctoc wrote:
> Hi all,
>
> While running freesurfer I encountered this problem
>
> Mris_make_surfaces: symbol lookup error: Mris_make_surfaces: undefined
> symbol :ncerr
>
> (Standard_in)2 : error : comparison in expression
>
> Can you pls help me fix these issue ?

--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] PBS vs fsl_sub in CHPC cluster

2015-02-05 Thread Timothy B. Brown
 "${command_line_specified_run_local}" ] ; then

> > >> echo "About to run

> > >> ${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh"

> > >> queuing_command=""

> > >> else

> > >> echo "About to use fsl_sub to queue or run

> > >> ${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh"

> > >> queuing_command="${FSLDIR}/bin/fsl_sub ${QUEUE}"

> > >> fi"

> > >>

> > >>

> > >>

> > >>

> > >>

> --
> Malcolm Tobias
> 314.362.1594

--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Regarding: fsl version 5.0.6

2015-04-12 Thread Timothy B. Brown
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Downloading Example Data

2015-05-29 Thread Timothy B. Brown
Dan,

I also just tested the link, and it worked for me. So, at least so far,
it seems that the link to download data is good.

I would suggest that, without trying to download any data, you just
visit https://db.humanconnectome.org in your browser and try to log in
to ConnectomeDB there. Let's see whether you are able to log in
successfully. That will give us a place to start in determining whether
this is a problem with your access to ConnectomeDB itself or some other
problem.

Best Regards, Tim

On Fri, May 29, 2015, at 16:50, David Van Essen wrote:
> It works for me as well.
>
> David
>
> On May 29, 2015, at 4:47 PM, Glasser, Matthew
>  wrote:
>


>> Not sure as the link just worked for me. Are you getting a chance to
>> log in? You might also try a different browser.
>>
>> Peace,
>>
>> Matt.
>>
>> From: "Daniel P. Bliss"  Date: Friday, May 29,
>> 2015 at 3:44 PM To: "hcp-users@humanconnectome.org"
>>  Subject: [HCP-Users] Downloading
>> Example Data
>>
>> Hi HCPers,
>>
>> I'd like to test my installation of the HCP Pipelines using the
>> example data referred to in the Pipeline wiki[1], but when I click
>> either of the download links, I get an "Error: Forbidden" message.
>> This is despite the fact that I'm registered at
>> db.humanconnectome.org[2].
>>
>> What am I doing wrong?
>>
>> Many thanks, Dan



Links:

  1. 
https://github.com/Washington-University/Pipelines/wiki/v3.4.0-Release-Notes,-Installation,-and-Usage#getting-example-data
  2. http://db.humanconnectome.org/
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Diffusion Preprocessing

2015-06-22 Thread Timothy B. Brown
Hi Levi,

Yes, it would be helpful in investigating the error you are receiving to
see both the script you are running and a capture of the exact output
you are getting from a run of the script.  That output might help to
determine if the "ERROR: could not open file" being reported by the imcp
command is because it cannot find the file that it is supposed to copy
or because it cannot open/create the file it is supposed to be creating
as a copy.
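In the meantime, one rough way to narrow that down yourself (the paths
below are placeholders; substitute the PosInputImages values echoed by
the script) is to check each input image with FSL's imtest and then
confirm that the directory being copied into exists and is writable:

# imtest prints 1 if an image exists (with any valid extension), 0 if it does not
for image in \
    /path/to/unprocessed/3T/Diffusion/SUBJECT_3T_DWI_dir95_RL \
    /path/to/unprocessed/3T/Diffusion/SUBJECT_3T_DWI_dir96_RL \
    /path/to/unprocessed/3T/Diffusion/SUBJECT_3T_DWI_dir97_RL ; do
    echo "${image}: $(${FSLDIR}/bin/imtest ${image})"
done

# Then check the destination directory reported in the script's output
ls -ld /path/to/the/working/directory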

Thanks, Tim

On Thu, Jun 18, 2015, at 14:32, levi solomyak wrote:
> Hello,
>
> I am working on running the diffusion preprocessing pipeline using the
> script from
> https://github.com/Washington-University/Pipelines/blob/master/Examples/Scripts/DiffusionPreprocessingBatch.sh
> as a reference.
>
> When it is supposed to copy the positive raw data to the working
> directory I keep getting an ERROR: could not open file (for the imcp
> command). The files themselves are not corrupted, and imcp works, and
> I checked that my StudyFolder path and EnvScript are correct. I am new
> to preprocessing HCP data so its very possible that I'm missing
> something basic.
>
> Does anyone know how to fix this kind of issue? I would be happy to
> provide my script for reference.
>
> Thank you so much!
>
> Levi Solomyak
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] warning about FSL_DIR setting

2015-06-22 Thread Timothy B. Brown
Hi Michael,

Thank you for sharing the results of your debugging efforts.

In response to your message and to some difficulties/questions that
students had in the recent HCP course, I have changed the example
SetUpHCPPipeline.sh script in the Examples/Scripts directory so that it
looks like the code below (my additions are the FSL_DIR lines under the
"Let FreeSurfer know what version of FSL to use" comment). I believe
these changes should ensure that FreeSurfer, as used in the Pipelines,
will use the same version of FSL as the pipelines themselves are using.

These changes have been checked in to the repository, but are not yet
included in a released version of the scripts. They will be included in
the next release.

Please feel free to let me know if you see any further error in
the logic.

Thanks,

Tim

#!/bin/bash

echo "This script must be SOURCED to correctly setup the environment prior to running any of the other HCP scripts contained here"

# Set up FSL (if not already done so in the running environment)
# Uncomment the following 2 lines (remove the leading #) and correct the
# FSLDIR setting for your setup
#export FSLDIR=/usr/share/fsl/5.0
#. ${FSLDIR}/etc/fslconf/fsl.sh

# Let FreeSurfer know what version of FSL to use
# FreeSurfer uses FSL_DIR instead of FSLDIR to determine the FSL version
export FSL_DIR="${FSLDIR}"

# Set up FreeSurfer (if not already done so in the running environment)
# Uncomment the following 2 lines (remove the leading #) and correct the
# FREESURFER_HOME setting for your setup
#export FREESURFER_HOME=/usr/local/bin/freesurfer
#. ${FREESURFER_HOME}/SetUpFreeSurfer.sh > /dev/null 2>&1

# Set up specific environment variables for the HCP Pipeline
export HCPPIPEDIR=${HOME}/projects/Pipelines
export CARET7DIR=${HOME}/tools/workbench/bin_rh_linux64
export HCPPIPEDIR_Templates=${HCPPIPEDIR}/global/templates
export HCPPIPEDIR_Bin=${HCPPIPEDIR}/global/binaries
export HCPPIPEDIR_Config=${HCPPIPEDIR}/global/config
export HCPPIPEDIR_PreFS=${HCPPIPEDIR}/PreFreeSurfer/scripts
export HCPPIPEDIR_FS=${HCPPIPEDIR}/FreeSurfer/scripts
export HCPPIPEDIR_PostFS=${HCPPIPEDIR}/PostFreeSurfer/scripts
export HCPPIPEDIR_fMRISurf=${HCPPIPEDIR}/fMRISurface/scripts
export HCPPIPEDIR_fMRIVol=${HCPPIPEDIR}/fMRIVolume/scripts
export HCPPIPEDIR_tfMRI=${HCPPIPEDIR}/tfMRI/scripts
export HCPPIPEDIR_dMRI=${HCPPIPEDIR}/DiffusionPreprocessing/scripts
export HCPPIPEDIR_dMRITract=${HCPPIPEDIR}/DiffusionTractography/scripts
export HCPPIPEDIR_Global=${HCPPIPEDIR}/global/scripts
export HCPPIPEDIR_tfMRIAnalysis=${HCPPIPEDIR}/TaskfMRIAnalysis/scripts
export MSMBin=${HCPPIPEDIR}/MSMBinaries



On Wed, Jun 17, 2015, at 14:49, m s wrote:
> Version 3.4.0
>
> Hi again,
>
> We've come across a problem that others may want to know about.
>
> The FSL setup scripts defines FSLDIR itself. FSL_DIR must be defined
> for freesurfer, to know what fsl version to use. The freesurfer setup
> script SetUpFreeSurfer.sh calls FreeSurferEnv.sh, which only sets
> FSL_DIR if it's not already set. So if freesurfer is setup in a user's
> .bash_profile (or similar), running SetUpFreeSurfer.sh from an HCP
> pipelines setup script will NOT change FSL_DIR if you're already
> pointing to a diff version of FSL than your .bash_profile.
>
> -M
>
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] -openmp option issue

2015-06-22 Thread Timothy B. Brown
Hi again Michael,

Thanks again for your input.

I have made the following changes to the FreeSurferPipeline.sh script.
They are not quite the same as your changes, because for the time being
I want to maintain a level of backward compatibility. That is, if NSLOTS
is set, we'll use it to determine the input value for the -openmp
option. If it is not set, I want to continue to behave just as before
(use 8 as the value for the -openmp option.)

Using the current value of 8 as the default for when NSLOTS is not set
might cause problems if specifying more cores than are actually
available to the recon-all command causes problems with the recon-all
run.  I have not been able to find any documentation that indicates what
would happen in that case. So I'm being a bit conservative here and not
changing the default from the current value of 8 to a new value of 1.

My changes look like the following. They have been checked into the
repository, are not yet in a released version, and should end up
(unless someone convinces me to set the default to 1 instead of 8) in
the next release.

cp "$SubjectDIR"/"$SubjectID"/mri/brainmask.auto.mgz
"$SubjectDIR"/"$SubjectID"/mri/brainmask.mgz

# Both the SGE and PBS cluster schedulers use the environment variable
# NSLOTS to indicate the number of cores

# a job will use.  If this environment variable is set, we will use it
# to determine the number of cores to

# tell recon-all to use.

if [[ -z ${NSLOTS} ]]

then

num_cores=8

else

num_cores="${NSLOTS}"

fi

recon-all -subjid $SubjectID -sd $SubjectDIR -autorecon2 -nosmooth2 -
noinflate2 -nocurvstats -nosegstats -openmp ${num_cores}


Best Regards,

Tim

On Tue, Jun 16, 2015, at 11:24, m s wrote:
> Version 3.4.0
>
> Hi,
>
> The Pipelines-3.4.0//FreeSurfer/FreeSurferPipeline.sh script has a hard-
> coded value of 8 for the -openmp argument in the call to recon-all.
> This doesn't seem like a good idea. Seems it should be an option and
> default to 1.
>
> We've modified it to work with NSLOTS for SGE on our cluster. Here's
> the diff if you're interested:
>
> < cp "$SubjectDIR"/"$SubjectID"/mri/brainmask.auto.mgz
> "$SubjectDIR"/"$SubjectID"/mri/brainmask.mgz < < # CFN mod - use
> $NSLOTS to define multithreading < if [[ -z "$NSLOTS" ]]; then <
> NSLOTS=1 < fi  < < recon-all -subjid $SubjectID -sd $SubjectDIR
> -autorecon2 -nosmooth2 -noinflate2 -nocurvstats -nosegstats -
> openmp $NSLOTS
> ---
> > cp "$SubjectDIR"/"$SubjectID"/mri/brainmask.auto.mgz
> > "$SubjectDIR"/"$SubjectID"/mri/brainmask.mgz recon-all -subjid
> > $SubjectID -sd $SubjectDIR -autorecon2 -nosmooth2 -noinflate2 -
> > nocurvstats -nosegstats -openmp 8
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] inf loop bug in scripts

2015-06-22 Thread Timothy B. Brown
Michael,

Once again, thanks for pointing this problem out.

I have altered the following scripts in the Examples/Scripts directory
so that the loop that parses command line options reports an error on
unrecognized option specifications and exits from the script. (They do
not supply usage information as part of the error message, though.)

 * DiffusionPreprocessingBatch.7T.sh
 * DiffusionPreprocessingBatch.sh
 * FreeSurferPipelineBatch.sh
 * GenericfMRISurfaceProcessingPipelineBatch.7T.sh
 * GenericfMRISurfaceProcessingPipelineBatch.sh
 * GenericfMRIVolumeProcessingPipelineBatch.7T.sh
 * GenericfMRIVolumeProcessingPipelineBatch.sh
 * PostFreeSurferPipelineBatch.sh
 * PreFreeSurferPipelineBatch.sh
 * TaskfMRIAnalysisBatch.sh

These changes are checked in and should make it into the next release.
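For reference, the shape of the fix is roughly as follows (a sketch
only; the option names and message wording in the actual scripts
differ). In the sketch the index is advanced only for recognized
options, so the added default case reports the bad option and exits
rather than letting the loop spin:

get_batch_options() {
    local arguments=("$@")
    local index=0
    local numArgs=${#arguments[@]}
    local argument

    while [ ${index} -lt ${numArgs} ] ; do
        argument=${arguments[index]}
        case ${argument} in
            --StudyFolder=*)
                command_line_specified_study_folder=${argument#*=}
                index=$(( index + 1 ))
                ;;
            --Subjlist=*)
                command_line_specified_subj_list=${argument#*=}
                index=$(( index + 1 ))
                ;;
            *)
                # New default case: report the unrecognized option and stop,
                # instead of looping forever without advancing the index
                echo "ERROR: unrecognized option: ${argument}" 1>&2
                exit 1
                ;;
        esac
    done
}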

Tim

On Tue, Jun 16, 2015, at 11:22, m s wrote:
> Version 3.4.0
>
> Hi,
>
> A number of the scripts in Example/Scripts fail to recognize malformed
> arguments and get stuck in an infinite loop, e.g.
> PreFreeSurferPipelineBatch.sh. A couple scripts handle this correctly,
> e.g. generate_level1_fsf.sh.
>
> -M
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Diffusion Preprocessing

2015-06-23 Thread Timothy B. Brown
Hi Levi,

The first thing to note is that in your script you have the
following line:

StudyFolder="$/projects/IndivRITL/data/leviDiff/disk1" #Location of
Subject folders (named by subjectID)

The dollar sign ($) symbol at the beginning of the quoted text doesn't
really make sense or belong there.

In a bash script, the $ symbol is supposed to precede a variable name,
but in the above statement the $ is not being used in front of a
variable name. It is being used in front of a full path specification:
/projects/IndivRITL/data...

Notice that the echoed output from DiffPreprocPipeline_PreEddy.sh is
telling you that the list of Positive Input Images that it will try to
work with is:
 * $/projects/IndivRITL/data/leviDiff/disk1/128632/unprocessed/3T/Diffusion/128632_3T_DWI_dir95_RL.nii.gz
 * $/projects/IndivRITL/data/leviDiff/disk1/128632/unprocessed/3T/Diffusion/128632_3T_DWI_dir96_RL.nii.gz
 * $/projects/IndivRITL/data/leviDiff/disk1/128632/unprocessed/3T/Diffusion/128632_3T_DWI_dir97_RL.nii.gz

Assuming that:

 * the subject data being used is HCP's subject 128632, and thus, more
   importantly, that the DWI_dir95, DWI_dir96, and DWI_dir97 files
   actually exist
 * the data is properly installed in the
   /projects/IndivRITL/data/leviDiff/disk1 directory

then it seems likely that your problem is that $ at the front of the
path to each file. No files exist at a path that starts with $. Thus
imcp cannot find/open the files to copy.

So, let's start by removing that $ in the setting of your
StudyFolder variable.
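In other words, the corrected setting (your quoted line with only the
leading $ removed) would be:

StudyFolder="/projects/IndivRITL/data/leviDiff/disk1" #Location of Subject folders (named by subjectID)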

Best Regards,

Tim

On Tue, Jun 23, 2015, at 10:09, levi solomyak wrote:
> Hi Tim,
>
> Below, I am copying both the error and the script.
>
> Thank you so much for your help!
>
> Best, Levi
>
> Error: The precise error I'm getting is:
>
> DiffPreprocPipeline_PreEddy.sh - PosInputImages:
> $/projects/IndivRITL/data/leviDiff/disk1/128632/unprocessed/3T/Diffusion/128632_3T_DWI_dir95_RL.nii.gz
> $/projects/IndivRITL/data/leviDiff/disk1/128632/unprocessed/3T/Diffusion/128632_3T_DWI_dir96_RL.nii.gz
> $/projects/IndivRITL/data/leviDiff/disk1/128632/unprocessed/3T/Diffusion/128632_3T_DWI_dir97_RL.nii.gz
> ERROR: Could not open file
>
> Usage: /usr/local/fsl/bin/imcp
> Usage: /usr/local/fsl/bin/imcp   ...
> Copies images from file1 to file2 (including all extensions)
>
> This is my script:
>
> #!/bin/bash
> #source opts.shlib
> #Diffusion preprocessing for the HCP Data
> #get_batch_options $@
> #Script for running this on the supercomputer
>
> StudyFolder="$/projects/IndivRITL/data/leviDiff/disk1" #Location of Subject folders (named by subjectID)
> Subjlist="108121" #Space delimited list of subject IDs
>
> #Pipeline environment script
> HCPPipe="/usr/local/HCP_Pipelines"
> EnvScript="${HCPPipe}/Examples/Scripts/SetUpHCPPipeline.sh"
>
> #Set up pipeline environment variables and software
> . ${EnvScript} #sets up the pipeline environment
>
> # Log the originating call
> echo "$@"
>
> for Subject in $Subjlist ; do
>     echo $Subject
>     EchoSpacing=0.78
>     PEdir=1 #Use 1 for Left-Right Phase Encoding, 2 for Anterior-Posterior
>
>     #Input Variables
>     SubjectID="$Subject" #Subject ID Name
>     RawDataDir="$StudyFolder/$SubjectID/unprocessed/3T/Diffusion/" #Folder where unprocessed diffusion data are
>
>     # Data with positive Phase encoding direction. Up to N>=1 series (here N=3), separated by @. (LR in HCP data, AP in 7T HCP data)
>     PosData="${RawDataDir}/${SubjectID}_3T_DWI_dir95_RL.nii.gz@${RawDataDir}/${SubjectID}_3T_DWI_dir96_RL.nii.gz@${RawDataDir}/${SubjectID}_3T_DWI_dir97_RL.nii.gz"
>
>     # Data with negative Phase encoding direction. Up to N>=1 series (here N=3), separated by @. (RL in HCP data, PA in 7T HCP data)
>     NegData="${RawDataDir}/${SubjectID}_3T_DWI_dir95_LR.nii.gz@${RawDataDir}/${SubjectID}_3T_DWI_dir96_LR.nii.gz@${RawDataDir}/${SubjectID}_3T_DWI_dir97_LR.nii.gz"
>
>     #Scan Settings
>     #EchoSpacing=0.78
>     #PEdir=1 #Use 1 for Left-Right Phase Encoding, 2 for Anterior-Posterior
>
>     Gdcoeffs="NONE" # Set to NONE to skip gradient distortion correction
>
>     echo "About to run ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh"
>     queuing_command=""
>
>     ${queuing_command} ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh \
>         --posData="${PosData}" --negData="${NegData}" \
>         --path="${StudyFolder}" --subject="${SubjectID}" \
>         --echospacing="${EchoSpacing}" --PEdir=${PEdir} \
>         --gdcoeffs="${Gdcoeffs}" \
>         --printcom=$PRINTCOM
> done

Re: [HCP-Users] -openmp option issue

2015-06-23 Thread Timothy B. Brown
Agreed. My plan is to add a number-of-cores command line option as part
of converting all the scripts to use the same mechanism for setting
default options, retrieving command line specified options, reporting
the options specified, supporting a --help option that shows usage
information, and reporting errors and exiting on unrecognized options.

Some of the scripts have already been modified to do all this in a
standardized way. Making all the rest of the scripts behave similarly is
pretty high on my priority list, but not yet at the top.

Tim

On Mon, Jun 22, 2015, at 16:49, m s wrote:
> Thanks Tim. FWIW it might add flexibility to make the # of cores a
> command line option to the script.
>
> -M
>
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] accessing files directly via XNAT / Amazon S3 / boto

2015-06-24 Thread Timothy B. Brown
Hi Ben,

You should be able to access the files you are looking for by
prepending the following prefix to each file's path:

https://db.humanconnectome.org/data/archive/projects/HCP_500/subjects/100307/experiments/100307_CREST/resources/100307_CREST/files

From the end of that prefix, you can attach the path to your desired
file in the standard packages and Connectome-in-a-Box directory
structure.

So to get the 100307_3T_tfMRI_EMOTION_LR.nii.gz file you would use:

https://db.humanconnectome.org/data/archive/projects/HCP_500/subjects/100307/experiments/100307_CREST/resources/100307_CREST/files/unprocessed/3T/tfMRI_EMOTION_LR/100307_3T_tfMRI_EMOTION_LR.nii.gz

(Yep, that's pretty long. But the expectation is that people are mostly
going to use this in scripts, so the length shouldn't be a big issue.)

Notice that from the directory name unprocessed on to the end of the
URI, the directory structure matches what is available in a subject's
directory in S3, in Connectome-in-a-Box distributions, and in unzipped
package files for the subject that are downloaded from the
db.humanconnectome.org web site.

Since this data can only be accessed by authorized users of ConnectomeDB
who have accepted the data use terms, you will need to supply
credentials one way or another to successfully access a file using a URI
like the above.

For example, if you are using CURL you could use the -u command line
option and supply a ConnectomeDB username and password in the form -u
username:password.

If you are accessing a lot of files using this mechanism (say in a
script), then in order to avoid proliferating a large number of
sessions, you should get a session id using commands something like:

user="this_is_my_username"
password="this_is_my_password"
jsession=`curl -u ${user}:${password} https://db.humanconnectome.org`

Then you can use the -b command line option with CURL:

curl -b "JSESSIONID=$jsession" -O **

As you would expect, you are not limited to using CURL for this. Any
mechanism that allows you to retrieve contents via a URI/URL should
work. So, for example, you could enter the above URL into the address
bar of a browser. Then you would be prompted to enter credentials before
you get access to the file. (Of course, this does you little good as a
scripting solution.)

I am not familiar with Amazon's boto package, but it seems from a
follow up message that you sent, you may have already tracked that
problem to a boto bug.

Hope that is helpful,

Tim

On Fri, Jun 12, 2015, at 19:30, Ben Cipollini wrote:
> Hi,
>
> I'm trying to access HCP files via scripting. I know the directory
> structure through the appendix III definitions. However, I am not sure
> how to access the resources through XNAT nor Amazon S3.
>
>
> In XNAT, I can't figure out what URL to construct to access any file.
> For example, what would be the URL to access the file 100307/tfMRI_EM-
> OTION_LR/100307/tfMRI_EMOTION_LR/100307_3T_tfMRI_EMOTION_LR.nii.gz ?
>
>
> Alternately, I'm trying to access the Amazon S3 files through Amazon's
> boto package. I can open a connection with my S3 credentials, and I
> see a "hcp-openaccess" bucket. When I try to list the bucket contents
> (for file_key in bucket.list(): print file_key), I get access denied.
>
> Has anybody tried accessing the S3 data in this way that can
> share an example script? Or any suggestions for other ways to
> script the access?
>
>
> Thanks, Ben
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] accessing files directly via XNAT / Amazon S3 / boto

2015-06-24 Thread Timothy B. Brown
Ben,

Just a quick follow-up.  If you want to access the files in the S3
bucket, it might be worth looking in to installing S3cmd
(http://s3tools.org/s3cmd). It is a command line tool for accessing
files in S3 buckets, and I've successfully used it to access some files
in the HCP OpenAccess bucket.  Being a command line tool, it should be
easy to use in a script.
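For example, after running s3cmd --configure with the AWS credentials
associated with your ConnectomeDB account, commands along these lines
should work (the bucket-relative path below simply mirrors the subject
directory structure described in my previous message, so treat it as
illustrative rather than exact):

# List a subject's unprocessed task fMRI directory in the HCP OpenAccess bucket
s3cmd ls s3://hcp-openaccess/HCP/100307/unprocessed/3T/tfMRI_EMOTION_LR/

# Copy a single file into the current directory
s3cmd get s3://hcp-openaccess/HCP/100307/unprocessed/3T/tfMRI_EMOTION_LR/100307_3T_tfMRI_EMOTION_LR.nii.gz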

Tim

On Wed, Jun 24, 2015, at 15:02, Timothy B. Brown wrote:
> Hi Ben,
>
> You should be able to access the files you are trying to access by
> prepending the following to a URI for the files:
>
> https://db.humanconnectome.org/data/archive/projects/HCP_500/subjects/100307/experiments/100307_CREST/resources/100307_CREST/files
>
> from the end of that prefix, you can attach the path to your desired
> file in the standard packages and Connectome-in-a-Box directory
> structure.
>
> So to get the 100307_3T_tfMRI_EMOTION_LR.nii.gz file you would use:
>
> https://db.humanconnectome.org/data/archive/projects/HCP_500/subjects/100307/experiments/100307_CREST/resources/100307_CREST/files/unprocessed/3T/tfMRI_EMOTION_LR/100307_3T_tfMRI_EMOTION_LR.nii.gz
>
> (Yep, that's pretty long. But the expectation is that people are
> mostly going to use this in scripts, so the length shouldn't be a
> big issue.)
>
> Notice that from the directory name unprocessed on to the end of the
> URI, the directory structure matches what is available in a subject's
> directory in S3, in Connectome-in-a-Box distributions, and in unzipped
> package files for the subject that are downloaded from the
> db.humanconnectome.org web site.
>
> Since this data can only be accessed by authorized users of
> ConnectomeDB who have accepted the data use terms, you will need to
> supply credentials one way or another to successfully access a file
> using a URI like the above.
>
> For example, if you are using CURL you could use the -u command line
> option and supply a ConnectomeDB username and password in the form -u
> username:password.
>
> If you are accessing a lot of file using this mechanism (say in a
> script), in order to avoid proliferating a large number of sessions,
> you should get a session id using commands something like:
>
> user="this_is_my_username" password="this_is_my_password"
> jsession=`curl -u ${user}:${password} https://db.humanconnectome.org`
>
> Then you can using the -b command line option with CURL
>
> curl -b "JSESSIONID=$jsession" -O **
>
> As you would expect, you are not limited to using CURL for this. Any
> mechanism that allows you to retrieve contents via a URI/URL should
> work. So, for example, you could enter the above URL into the address
> bar of a browser. Then you would be prompted to enter credentials
> before you get access to the file. (Of course, this does you little
> good as a scripting solution.)
>
> I am not familiar with Amazon's boto package, but it seems from a
> follow up message that you sent, you may have already tracked that
> problem to a boto bug.
>
> Hope that is helpful,
>
> Tim
>
> On Fri, Jun 12, 2015, at 19:30, Ben Cipollini wrote:
>> Hi,
>>
>> I'm trying to access HCP files via scripting. I know the directory
>> structure through the appendix III definitions. However, I am not
>> sure how to access the resources through XNAT nor Amazon S3.
>>
>>
>> In XNAT, I can't figure out what URL to construct to access any file.
>> For example, what would be the URL to access the file 100307/tfMRI_E-
>> MOTION_LR/100307/tfMRI_EMOTION_LR/100307_3T_tfMRI_EMOTION_LR.nii.gz ?
>>
>>
>> Alternately, I'm trying to access the Amazon S3 files through
>> Amazon's boto package. I can open a connection with my S3
>> credentials, and I see a "hcp-openaccess" bucket. When I try to list
>> the bucket contents (for file_key in bucket.list(): print file_key),
>> I get access denied.
>>
>> Has anybody tried accessing the S3 data in this way that can share an
>> example script? Or any suggestions for other ways to script the
>> access?
>>
>>
>> Thanks, Ben


Re: [HCP-Users] eddy_parameters file

2015-10-20 Thread Timothy B. Brown
Joseph,

On 16 Oct 2015 at 17:14, Matthew Glasser  wrote:
>
> I don’t know about that, but it was an oversight to leave this file
> out, as I had intended for it to be available.  From Glasser et al
> 2013 Pipelines paper:
>
> "All these distortions are corrected in a single resampling step using
> the “eddy” tool in FSL5 (Andersson et al., 2012; Sotiropoulos it et
> al., this issue). Eddy also produces a text file that includes the
> modeled motion parameters and the parameters of the eddy current
> fields. The first six columns contain the rigid motion parameters
> (translation in mm, rotation in radians), and the last 10 contain the
> eddy current distortion parameters."

The eddy_parameters file will be included in the Diffusion Preprocessed
package as of the next data release.

In the meantime, assuming you have a ConnectomeDB account, you should be
able to retrieve the file using the XNAT REST API with a URI like the
following:

https://db.humanconnectome.org/data/projects/HCP_500/subjects/100307/experiments/100307_3T/resources/Diffusion_preproc/files/Diffusion/eddy/eddy_unwarped_images.eddy_parameters

You will, of course, need to replace both occurrences of 100307 in the
above with the subject-id number for the subject for which you would
like to download the file.
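For example, with curl (substitute your own ConnectomeDB credentials
and the subject of interest; the -u option works the same way as
described earlier on this list):

subject=100307   # replace with the subject you want
curl -u my_username:my_password -O \
    "https://db.humanconnectome.org/data/projects/HCP_500/subjects/${subject}/experiments/${subject}_3T/resources/Diffusion_preproc/files/Diffusion/eddy/eddy_unwarped_images.eddy_parameters"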

See
http://www.mail-archive.com/hcp-users%40humanconnectome.org/msg01347.html
for some further information and links about using the XNAT REST API.

Tim
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Verifying HCP data versions

2015-12-15 Thread Timothy B. Brown
en
>used in HCP to date: version r177 for subjects scanned in Q1
>through mid-Q3, version r227 for subjects scanned mid-Q3 and after.
>“) Again, is there any scriptable way to check which version was
>used for data that is already downloaded? (Same reason as above)
>
> Many thanks,
>
> M@
> --
> Matthew George Liptrot
>
>
> Department of Computer Science University of Copenhagen & Section for
> Cognitive Systems Department of Applied Mathematics and Computer
> Science Technical University of Denmark
>
> http://about.me/matthewliptrot
>
>
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Having problems with installing aspera, upgrade chromium, download data for doing tutorials with workbench....

2015-12-16 Thread Timothy B. Brown
I use an Ubuntu 15.04 VM. My Ubuntu VM is running with Windows 7 as the
host operating system.  Even from back when I used Ubuntu 14.04 or
Ubuntu 14.10, I have never been able to succeed in getting Aspera to
work with either Chromium or Firefox when running in my Ubuntu VM.  Even
after successfully executing the Aspera installation script and getting
an installation complete message, the browser still complains that I do
not have the Aspera plug-in for downloading.

When I need to do an Aspera-based download, I typically fire up a
browser under the control of Windows (not in the VM) and download the
file to a folder that is shared with the VM (or download the file and
then move it to a folder that is shared between the host system and the
VM).  The Aspera plug in has worked fine for me that way.

Hope that is somewhat helpful,

Tim

On Wed, Dec 16, 2015, at 14:36, Gregory Butron wrote:
> Just a note, I was never able to get aspera to work properly in ubuntu
> 14.04. Instead I used dragon disk to connect to the amazon server
> directly and it worked very well.

> On Dec 16, 2015 2:00 PM, "Timothy Coalson"  wrote:

>> With a .sh file, you don't use "apt-get install", it is a script that
>> you run directly in a terminal:
>>
>> tim@timsdev:~/Downloads$ ./aspera-connect-3.6.0.106805-linux-64.sh
>>
>> Installing Aspera Connect
>>
>> Deploying Aspera Connect (/home/tim/.aspera/connect) for the
>> current user only. Restart firefox manually to load the Aspera
>> Connect plug-in
>>
>> Install complete. tim@timsdev:~/Downloads$
>>
>> Tim
>>
>>
>> On Wed, Dec 16, 2015 at 1:21 PM, Rosalia Dacosta Aguayo
>>  wrote:

>>> Dear Caret and Workbench team,
>>>
>>> I have registered to workbench and I I had downloaded workbench for
>>> linux 64bitsok...no problem with this...and with the
>>> tutorial...the problem is when trying to download your images in
>>> order to do the tutorials.
>>>
>>> The problem is with downloading the data it aks me for intaslling
>>> aspera and other pluginwith the other I tell that yes, but it
>>> tells me that is not supported and with aspera tells me something
>>> similar. I am working in a Virtual Maxine, with UBUNTU 14.04 and
>>> Neurodebian installedany suggestion to cope with this problem? I
>>> have tried to update chromium in the neurodebian package but it has
>>> been impossible...I have directly gone to the aspera web page and I
>>> have downloaded it...a file.sh...but I try to execute it and nothing
>>> seems to work... (I have aspera in my Destktop...). I have wrote in
>>> my shell: sudo apt-get install (and all the name of 
>>> aspera:aspera-connect-3.6.1.110647-linux-
>>> 64.sh but nothing works wellI have to install plugins for
>>> chromium and I don't find how to do it, although I have been looking
>>> for in internet...it seems that I have to change some options in my
>>> chromium but those options does not appear...
>>>

>>>
>>> The other question is that  you say that you no longer actively
>>> develop Caret, but I need to do a tutorial with the last version and
>>> download the images necessary for doing thisany suggestion with
>>> this would be highly appreciate? I would lilke to do tutorials
>>> with caret as well as workbench...
>>>
>>> Yours sincerely,
>>>
>>> Rosalia.
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu




Re: [HCP-Users] Having problems with installing aspera, upgrade chromium, download data for doing tutorials with workbench....

2015-12-16 Thread Timothy B. Brown
Hi Rosalia,

How to create a folder that is shared between the host and guest
(VM) would depend on what tool you are using to run your VM in the
first place.

If you are using VMware (Player or Workstation) I would suggest you have
a look at the document *VMware Workstation 5.0 Using Shared Folders* at
https://www.vmware.com/support/ws5/doc/ws_running_shared_folders.html
This is written for VMware Workstation 5.0, but it seems to be valid for
VMware Player and later versions of VMware Workstation.

If you are using Oracle VirtualBox, I would suggest you have a look at
the document *VirtualBox Shared Folders* at
http://virtuatopia.com/index.php/VirtualBox_Shared_Folders.

If you are using some other program to run your VM, you'll need to
check the documentation or do a web search for documentation that will
guide you.

Please reply to the HCP-Users list so that others who have the same
question can also benefit.

Best Regards,

Tim

On Wed, Dec 16, 2015, at 15:01, Rosalia Dacosta Aguayo wrote:
> Hi Timothy, ok, how can I create a folder shared with VM ?
>
> 2015-12-16 21:55 GMT+01:00 Timothy B. Brown :

>> I use an Ubuntu 15.04 VM. My Ubuntu VM is running with Windows 7 as
>> the host operating system.  Even from back when I used Ubuntu 14.04
>> or Ubuntu 14.10, I have never been able to succeed in getting Aspera
>> to work with either Chromium or Firefox when running in my Ubuntu VM.
>> Even after successfully executing the Aspera installation script and
>> getting an installation complete message, the browser still complains
>> that I do not have the Aspera plug-in for downloading.
>>
>> When I need to do an Aspera-based download, I typically fire up a
>> browser under the control of Windows (not in the VM) and download the
>> file to a folder that is shared with the VM (or download the file and
>> then move it to a folder that is shared between the host system and
>> the VM).  The Aspera plug in has worked fine for me that way.
>>
>> Hope that is somewhat helpful,
>>
>> Tim
>>
>> On Wed, Dec 16, 2015, at 14:36, Gregory Butron wrote:
>>> Just a note, I was never able to get aspera to work properly in
>>> ubuntu 14.04. Instead I used dragon disk to connect to the amazon
>>> server directly and it worked very well.

>>> On Dec 16, 2015 2:00 PM, "Timothy Coalson"  wrote:

>>>> With a .sh file, you don't use "apt-get install", it is a script
>>>> that you run directly in a terminal:
>>>>
>>>> tim@timsdev:~/Downloads$ ./aspera-connect-3.6.0.106805-linux-64.sh
>>>>
>>>> Installing Aspera Connect
>>>>
>>>> Deploying Aspera Connect (/home/tim/.aspera/connect) for the
>>>> current user only. Restart firefox manually to load the Aspera
>>>> Connect plug-in
>>>>
>>>> Install complete. tim@timsdev:~/Downloads$
>>>>
>>>> Tim
>>>>
>>>>
>>>> On Wed, Dec 16, 2015 at 1:21 PM, Rosalia Dacosta Aguayo
>>>>  wrote:

>>>>> Dear Caret and Workbench team,
>>>>>
>>>>> I have registered to workbench and I I had downloaded workbench
>>>>> for linux 64bitsok...no problem with this...and with the
>>>>> tutorial...the problem is when trying to download your images in
>>>>> order to do the tutorials.
>>>>>
>>>>> The problem is with downloading the data: it asks me to install
>>>>> aspera and another plugin. With the other one I say yes, but it
>>>>> tells me that it is not supported, and with aspera it tells me
>>>>> something similar. I am working in a Virtual Machine with UBUNTU
>>>>> 14.04 and Neurodebian installed...any suggestion to cope with this
>>>>> problem? I have tried to update chromium in the neurodebian package
>>>>> but it has been impossible...I have gone directly to the aspera web
>>>>> page and I have downloaded it...a file.sh...but I try to execute it
>>>>> and nothing seems to work... (I have aspera on my Desktop...). I
>>>>> have written in my shell: sudo apt-get install (and all the name of
>>>>> aspera: aspera-connect-3.6.1.110647-linux-64.sh) but nothing works
>>>>> well...I have to install plugins for chromium and I don't find how
>>>>> to do it, although I have been looking for it on the internet...it
>>>>> seems that I have to change some options in my chromium but those
>>>>> options do not appear...
>>>>>


Re: [HCP-Users] Error in FreeSurferPipeline

2015-12-18 Thread Timothy B. Brown
Hi Mahmoud,

Short answer:

Because of directory structure conventions and file naming conventions
used in the pipeline scripts, you cannot rename the subject directory
and expect that the pipelines will run without (at least) also renaming
a number of other files in the subject directory. So renaming the
subject directory will break quite a number of things.

Long answer:

The FreeSurferPipeline in particular and the other pipeline scripts in
general depend upon a directory structure for the data that has a root
*study* folder in which each subdirectory of that *study* folder is
named for the subject id.  So if the data you are processing are in a
directory named my_study, and you had only two subjects with subject ids
xyz and abc, then you would need to have xyz and abc subdirectories in
your my_study directory.

Seems like you've got that all set up correctly initially.

But...the pipeline scripts also depend upon file naming conventions that
use the subject id in the file names.  For example, if the subject
subdirectory is named xyz, then a number of files in that directory tree
will be expected to be named starting with xyz. For example, there is an
expectation that there will be a file named
my_study/xyz/unprocessed/3T/T1_MPR1/xyz_3T_T1w_MPR1.nii.gz.  Notice that
the two places the subject id is used have to match. Similarly, many of
the pipelines will generate output files that are used by subsequent
pipelines using a naming convention in which the names of the generated
files will start with the subject id.

So, I suspect that this is the problem you are encountering.  When you
run FreeSurferPipeLineBatch and tell it to process subject xyz_1, the
FreeSurferPipeline script is expecting to find a number of input files
with names that start with xyz_1. But they're not there because your
xyz_1 directory is just a copy of the original xyz directory.  So many
of the files in it start with simply xyz.
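
(If you really do want to work under a renamed subject id, the kind of
renaming that would be needed looks roughly like the sketch below. This
is not part of the pipelines, the xyz/xyz_1 ids are just the ones from
this example, and it does not fix any paths that may be recorded inside
files, so re-running from the start with the desired id is still the
safer route.)

cd /path/to/my_study                  # your study folder
cp -r xyz xyz_1                       # copy the subject directory
# rename everything inside that starts with the old subject id
find xyz_1 -mindepth 1 -depth -name 'xyz*' | while IFS= read -r old; do
    new="$(dirname "$old")/$(basename "$old" | sed 's/^xyz/xyz_1/')"
    mv "$old" "$new"
done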

Hope that helps,

Tim

On Fri, Dec 18, 2015, at 07:55, Mahmoud wrote:
> Dear experts,
>
> After running the PreFreeSurferPipelineBatch for subjectID = xyz I
> changed the ID (i.e. renamed the subject directory name) to xyz_1 and
> ran the FreeSurferPipeLineBatch which ended up with this error in the
> error log file (I just copied the few last lines):
>
> measuring cortical thickness... writing cortical thickness estimate to
> 'thickness' file. positioning took 5.6 minutes Error: no output
> filename specified!
>
> Is this because of that name change? if so, is there any
> solution to it ?
>
> Thank you! Mahmoud
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Error in FreeSurferPipeline

2015-12-29 Thread Timothy B. Brown
Hi Mahmoud,

Sorry for the delay in answering your question below. I was away for the
holidays for a while.

Have you already resolved the issue you describe (the "no output
filename specified" error in the FreeSurferPipeline)?  I noticed a
subsequent message from you to the hcp-users list in which you state:

I've applied HCP pipeline scripts to our structural data and it looks
like everything goes well to the end of PostFreeSurferPipelline.

That being the case, does that mean that you've already resolved the
problem with the "Error: no output filename specified!" in the
FreeSurferPipeline?

If you have, you can likely ignore the remainder of this message.

-

I have taken a look at the log files that you sent, and it appears to me
that the FreeSurferHiresPial.sh script (which is invoked as part of the
FreeSurferPipeline) is successfully running through line 52 in the
current latest version of the FreeSurferHiresPial.sh script. That line
looks like:

52: mri_surf2surf --s $SubjectID --sval-xyz pial.T2 --reg $regII
$mridir/orig.mgz --tval-xyz --tval pial --hemi rh

After that line, there are statements that extract some information from
the brain.finalsurfs.mgz file and create the
<StudyFolder>/<SubjectID>/T1w/<SubjectID>/mri/c_ras.mat matrix file.

54: MatrixX=`mri_info $mridir/brain.finalsurfs.mgz | grep "c_r" | cut -d
"=" -f 5 | sed s/" "/""/g`

55: MatrixY=`mri_info $mridir/brain.finalsurfs.mgz | grep "c_a" | cut -d
"=" -f 5 | sed s/" "/""/g`

56: MatrixZ=`mri_info $mridir/brain.finalsurfs.mgz | grep "c_s" | cut -d
"=" -f 5 | sed s/" "/""/g`

57: echo "1 0 0 ""$MatrixX" > $mridir/c_ras.mat

58: echo "0 1 0 ""$MatrixY" >> $mridir/c_ras.mat

59: echo "0 0 1 ""$MatrixZ" >> $mridir/c_ras.mat

60: echo "0 0 0 1" >> $mridir/c_ras.mat

Next is a call to mris_convert on line 62 of the
FreeSurferHiresPial.sh script.

62: mris_convert "$surfdir"/lh.white "$surfdir"/lh.white.surf.gii

I don't seem to find any evidence in your log files that the call to
mris_convert is succeeding.  So your run seems to be ending with the
"Error: no output filename specified!" message somewhere between lines
54 and 62.

It seems a bit unlikely that the problem is in lines 54 through 60, but
just to check, can you confirm for me that the file
<StudyFolder>/<SubjectID>/T1w/<SubjectID>/mri/c_ras.mat exists? In the case
shown in your logs, I believe this file should actually be named:
/home/mydata/HCPReady/111_Sigma7/T1w/111_Sigma7/mri/c_ras.mat.  If it
exists, is it a 4 line text file that looks something like the
following?

1 0 0 -1.0000
0 1 0 -17.5000
0 0 1 19.0000
0 0 0 1

The last numeric values on each of the first three lines will probably
differ from the above.

Assuming that the c_ras.mat file is successfully created, then I would
note that line 62 is the first use of the $surfdir variable in the
FreeSurferHiresPial.sh script (after setting its value).  In the case
shown in your log files, the $surfdir variable should have a value of
/home/mydata/HCPReady/111_Sigma7/T1w/111_Sigma7/surf.

Could you please verify that such a directory exists and is writable?
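
For example, a command like the following (using the path from your
logs) will show whether it exists and what its permissions are:

ls -ld /home/mydata/HCPReady/111_Sigma7/T1w/111_Sigma7/surf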

Thanks,

Tim



On Tue, Dec 22, 2015, at 07:06, Mahmoud wrote:
> Hello Tim and others,
>
> Following the previous posts, I renamed my dataset directory and ran
> the PreFreeSurferPipeline on it and it looks like finished
> successfully. After running the FreeSurferPipeline I got the same
> error "Error: no output filename specified!" . I've attached the
> FreeSurferPipeline output and error log here for you reference.
>
> I appreciate your help and time.
>
> Thank you, Mahmoud
>
> On Fri, Dec 18, 2015 at 11:39 AM, Mahmoud  wrote:

>> Dear Tim,
>>
>> I got it. Thank you for detailed explanation.
>>
>> Best, Mahmoud
>>
>> On Fri, Dec 18, 2015 at 11:29 AM, Timothy B. Brown
>>  wrote:

>>> Hi Mahmoud,
>>>
>>> Short answer:
>>>
>>> Because of directory structure conventions and file naming
>>> conventions used in the pipeline scripts, you cannot rename the
>>> subject directory and expect that the pipelines will run without (at
>>> least) also renaming a number of other files in the subject
>>> directory. So renaming the subject directory will break quite a
>>> number of things.
>>>
>>> Long answer:
>>>
>>> The FreeSurferPipeline in particular and the other pipeline scripts
>>> in general depend upon a directory structure for the data that has a
>>> root *study* folder in which each subdirectory of that *study*
>>> folder is named for the subject id.  So if the data you are
>>

Re: [HCP-Users] HCP Pipelines :DiffusionPreprocessing

2016-02-09 Thread Timothy B. Brown
> Matt.
>>>>
>>>> From:  on behalf of Dev vasu
>>>>  Date: Monday, February 8,
>>>> 2016 at 10:55 AM To: "hcp-users@humanconnectome.org" >>> us...@humanconnectome.org> Subject: [HCP-Users] HCP Pipelines
>>>> :DiffusionPreprocessing
>>>>
>>>> Dear Professor,
>>>>

>>>> I would like to use DiffusionPreprocessing  Pipeline for my DTI
>>>> structural connectivity analysis, I have completed the tutorial on
>>>> how to use HCP Pipelines for Preprocessing, I have followed the
>>>> instruction and i have provided the path to study folder and
>>>> subject, when i run the  DiffusionPreprocessing.sh, i am incurring
>>>> following error
>>>>
>>>> "./DiffPreprocPipeline.sh: line 96: /global/scripts/log.shlib: No
>>>> such file or directory
>>>>
"
>>>>

>>>> Although the file log.shlib is found in global directory , I am
>>>> incurring the same error.
>>>>

>>>> Could you  please clarify me.
>>>>
>>>>

>>>> Thanks Vasudev
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] HCP Pipelines :DiffusionPreprocessing

2016-02-15 Thread Timothy B. Brown
Vasudev,

Based on a subsequent message that you sent to this mailing list asking
for alternatives available when you do not have T1w or T2w images, I'm
guessing you may have already figured out what I'm about to write.  So
please forgive me if I'm simply writing what you already know.

First, there is not enough information in your message below to be able
to diagnose why you would be getting a "Segmentation fault".  The
Segmentation fault could be coming from something happening in your
modified batch script (PreFreeSurferPipelineBatch.mine.sh) or from the
PreFreeSurferPipeline.sh script that is invoked by the batch script or
(most likely) from one of the programs invoked by the
PreFreeSurferPipeline.sh script. The most likely reason for the
segmentation fault is that, as you indicate, you do not have any
structural images for the pipeline to process.

Second, based on your indication that you do not have any T1w or T2w
images, I would question the need or ability to run the
PreFreeSurferPipeline on your data at all. The main functions of this
pipeline use the structural (T1w and T2w) images to create a "native
space" for the subject by distortion correcting the structural images,
rigidly aligning them to the axes of the MNI space, etc. It also does an
initial brain extraction based on the structural images. Others on this
list are better qualified to provide more details than I, but, in short,
the main inputs to the PreFreeSurferPipeline.sh script (and the
Structural Preprocessing Pipeline of which the PreFreeSurferPipeline is
the initial step) *are the T1w and T2w images*.

I am not experienced enough to suggest alternatives for you when
structural images are not available. I will defer to others who may have
suggestions for alternatives.

Tim

On Sat, Feb 13, 2016, at 06:21, Dev vasu wrote:
> Dear Sir,
>

> When i start running PreFreeSurferPipeline I am incurring an error
> which says following :
>
>
> " vasudev@vasudev-OptiPlex-780:~/Documents/Pipelines-
> master/Examples/Scripts$ ./PreFreeSurferPipelineBatch.mine.sh
>
Segmentation fault (core dumped)"
>

> I have not acquired  T1w and T2w images, I just have DTI and resting
> state fMRI data along with Anatomical NIFTI file, could you please let
> me know why such an error is occurring.
>

> I have cross checked everything and i have set up the path accurately
> for running the Pipeline.
>
>

> Thanks, Vasudev
>
>

>
> On 9 February 2016 at 17:27, Dev vasu
>  wrote:

>> Dear Sir,
>>

>> Thanks for your mail, I will recheck everything and try if it works
>> out, I apologize if my repeated mails are annoying you, since i am
>> new to HCP workbench, I am facing some problems, earlier i use to
>> work on Neuroimaging subjects using SPM in Matlab R2013b ( windows 7
>> OS ), but for my current work, I have to establish all the study
>> protocols in workbench as part of my graduate studies. I am totally
>> new to this and i am trying my best to resolve my issues on my own,
>> If in case some problems arise i will contact your team, I thank HCP
>> community for being patient enough to respond to my questions.
>>
>>

>> Thanks Vasudev
>>
>> On 9 February 2016 at 17:18, Timothy B. Brown
>>  wrote:

>>> Hi Vasudev,
>>>
>>> As Matt points out, if you visit the Release Notes, Installation,
>>> and Usage Guide pointed (see Matt's link provided below), you should
>>> find a section under the heading Running the HCP Pipeline on example
>>> data. This, hopefully, should be a good starting point for
>>> understanding the environment variables that must be set correctly
>>> in order to run the HCP Pipeline Scripts.  As is mentioned in the
>>> document, we recommend setting the values for these environment
>>> variables in an "environment script" that is sourced before running
>>> any of the pipelines.
>>>
>>> As Tim Coalson points out, the error message you are getting when
>>> you try to run the Diffusion Preprocessing pipeline is coming from a
>>> line of code that expects the environment variable HCPPIPEDIR to be
>>> set to contain the path to the root directory at which you have
>>> installed the HCP Pipeline Scripts.
>>>
>>> That particular line of code looks like:
>>>
>>> source ${HCPPIPEDIR}/global/scripts/log.shlib
>>>
>>> The code is written with the expectation that the HCPPIPEDIR
>>> environment variable is set, and whatever value it is set to will be
>>> used in the actual command that is run. If the HCPPIPEDIR
>>> environment variable is not set, then the use of ${HCPPIPEDIR} in
>&g

Re: [HCP-Users] FSL numpy error while running Freesurfer pipeline

2016-03-10 Thread Timothy B. Brown
Vasudev,
 
It seems like you may need to provide the Python environment with some
indication of where to find the installation of numpy. I'm not sure why
this would be happening unless for some reason the Python installation
you are using when running the HCP Pipeline Scripts is different from
the Python installation that you would use if you simply entered python
at the command line.
 
Let's try the following.
 
First, you will need to know where numpy is installed on your system.
 
Based on the output from your apt-cache command, it looks to me like you are 
working on an Ubuntu system and have installed numpy with a command something 
like:
 
apt-get install python-numpy
 
If that is the case, you should be able to find out where numpy is installed by 
starting up Python from a terminal/shell and issuing commands similar to the 
following:
 
$ python
Python 2.7.9 (default, Apr  2 2015, 15:33:21)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import imp
>>> imp.find_module('numpy')
(None, '/usr/lib/python2.7/dist-packages/numpy', ('', '', 5))

>>> quit()
$
 
The directory listed after "None," should be where numpy is installed on
your system.  In the above example, it is /usr/lib/python2.7/dist-
packages/numpy.
 
Once you have that information, try the following to see if we can get
numpy found when you are running PreFreeSurferPipeline.sh
 
Either in a script file that you source in order to set up your environment 
before running the PreFreeSurferPipeline.sh script, or in your ~/.bash_profile 
file set your PYTHONPATH environment variable with a command like the following:
 
export PYTHONPATH=/usr/lib/python2.7/dist-packages/numpy:${PYTHONPATH}
 
Of course, you will want to use the directory you found above in place
of /usr/lib/python2.7/dist-packages/numpy in the above command.
 
This form of setting the PYTHONPATH environment variable assumes you are
using the bash shell which is the default for most Linux environments.
If you do this in your ~/.bash_profile file, you should log out and log
back in to make sure that the PYTHONPATH environment variable gets set.
 
With the PYTHONPATH environment variable set, when you run the
PreFreeSurferPipeline.sh script and the Python environment is started in
order to run the aff2rigid command (which is a Python script), the
directories in your PYTHONPATH variable should be included in the list
of directories in which to look for items to import.  So the from numpy
import * statement in the Python script should then work.
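
As a quick sanity check (just an example command), you can also ask
Python directly where it is picking numpy up from:

$ python -c "import numpy; print(numpy.__version__); print(numpy.__file__)"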
 
Give this a try and let me know how things work out.
 
Tim


On Thu, Mar 10, 2016, at 13:33, Dev vasu wrote:
> Dear Sir,
>

> I have started running PreFreeSurferPipeline.sh  and every time i run
> , i am incurring following error,
>
> "START: ACPCAlignment
>
Final FOV is:
>
0.00 160.00 0.00 256.00 86.00 150.00
>
>
Traceback (most recent call last):
>
File "/usr/share/fsl/5.0/bin/aff2rigid", line 75, in 
>
from numpy import *
>
ImportError: No module named numpy "
>
>

>
> Numpy is installed in my computer but the program could not
> import numpy.
>
>
> " vasudev@vasudev-OptiPlex-780:~/Documents/workbench/bin_linux64$ apt-
> cache policy python-numpy
>
python-numpy:
>
Installed: 1:1.8.2-0ubuntu0.1
>
Candidate: 1:1.8.2-0ubuntu0.1
>
Version table:
>
*** 1:1.8.2-0ubuntu0.1 0
>
500 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main
amd64 Packages
>
100 /var/lib/dpkg/status
>
1:1.8.1-1ubuntu1 0
>
500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages "
>
>

> Could you please let me know how shall i resolve it.
>
>
>

> Thanks
> Vasudev
>
>
>
>
>
>

--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] FSL numpy error while running Freesurfer pipeline

2016-03-10 Thread Timothy B. Brown
Hi again Vasudev,
 
From what you've put in your message below, it looks to me like you may
be missing an important character in the following lines.
 
You have:
 
export FSLDIR=/usr/share/fsl/5.0
${FSLDIR}/etc/fslconf/fsl.sh
 
you should have:
 
export FSLDIR=/usr/share/fsl/5.0
. ${FSLDIR}/etc/fslconf/fsl.sh
 
Notice the "." at the beginning of the second line. That "." is very
important.
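
(Side note: in bash, a leading "." runs a script in the current shell;
"source" is just the bash spelling of the same thing, so these two lines
are equivalent.)

. ${FSLDIR}/etc/fslconf/fsl.sh
source ${FSLDIR}/etc/fslconf/fsl.sh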
 
Please verify that it is there.  You will also need to verify that
/usr/share/fsl/5.0 is the correct directory in which you have FSL
installed.
 
If /usr/share/fsl/5.0 is where you have FSL installed, you do have the
"." at the beginning of the second line above, and you still get the
"Permission denied" error, then try issuing the following command and
sending the output to me.
 
$ ls -l /usr/share/fsl/5.0/etc/fslconf
 
Regards,
 
Tim


On Thu, Mar 10, 2016, at 14:41, Dev vasu wrote:
> Dear Sir,
>

>
Thanks for this information i will try it now, i also had another doubt
>
>

> In the script the path to  FSL directory suggested as follows

>
> " export FSLDIR=/usr/share/fsl/5.0
${FSLDIR}/etc/fslconf/fsl.sh
> and when i run the script i am incurring following error

>
> dev@dev-OptiPlex-780:~/Documents/Pipelines-master/PreFreeSurfer$
./PreFreeSurferPipeline.mine.sh

./PreFreeSurferPipeline.mine.sh: line
170: /usr/share/fsl/5.0/etc/fslconf/fsl.sh: Permission denied "
>

>

>
> Could you please let me know the reason for the error.


>

>
> Thanks
>

>
> Vasudev
>
> On 10 March 2016 at 21:36, Timothy B. Brown  wrote:
>> Vasudev,
>>
>> It seems like you may need to provide the Python environment with
>> some indication of where to find the installation of numpy. I'm not
>> sure why this would be happening unless for some reason the Python
>> installation you are using when running the HCP Pipeline Scripts is
>> different from the Python installation that you would use if you
>> simply entered python at the command line.
>>
>> Let's try the following.
>>
>> First, you will need to know where numpy is installed on your system.
>>
>> Based on the output from your apt-cache command, it looks to me like you are 
>> working on an Ubuntu system and have installed numpy with a command 
>> something like:
>>
>> apt-get install python-numpy
>>
>> If that is the case, you should be able to find out where numpy is installed 
>> by starting up Python from a terminal/shell and issuing commands similar to 
>> the following:
>>
>> $ python
>> Python 2.7.9 (default, Apr  2 2015, 15:33:21)
>> [GCC 4.9.2] on linux2
>> Type "help", "copyright", "credits" or "license" for more
>> information.
>>
>>> import imp
>> >>> imp.find_module('numpy')
>> (None, '/usr/lib/python2.7/dist-packages/numpy', ('', '', 5))
>>
>>> quit()
>> $
>>
>> The directory listed after "None," should be where numpy is installed
>> on you system.  In the above example, it is /usr/lib/python2.7/dist-
>> packages/numpy.
>>
>> Once you have that information, try the following to see if we can
>> get numpy found when you are running PreFreeSurferPipeline.sh
>>
>> Either in a script file that you source in order to set up your environment 
>> before running the PreFreeSurferPipeline.sh script, or in your 
>> ~/.bash_profile file set your PYTHONPATH environment variable with a command 
>> like the following:
>>
>> export PYTHONPATH=/usr/lib/python2.7/dist-packages/numpy:${PYTHONPATH}
>>
>> Of course, you will want to use the directory you found above in
>> place of /usr/lib/python2.7/dist-packages/numpy in the above command.
>>
>> This form of setting the PYTHONPATH environment variable assumes you
>> are using the bash shell which is the default for most Linux
>> environments.  If you do this in your ~/.bash_profile file, you
>> should log out and log back in to make sure that the PYTHONPATH
>> environment variable gets set.
>>
>> With the PYTHONPATH environment variable set, when your run the
>> PreFreeSurferPipeline.sh script and the Python environment is started
>> in order to run the aff2rigid command (which is a Python script), the
>> directories in your PYTHONPATH variable should be included in the
>> list of directories in which to look for items to import.  So the
>> from numpy import * statement in the Python script should then work.
>>
>> Give this a try and let me know how things work out.
>>
>> Tim
>>
>

Re: [HCP-Users] FSL numpy error while running Freesurfer pipeline

2016-03-10 Thread Timothy B. Brown
 
That would imply that you *don't* actually have numpy installed.
 
I am not sure why your apt-cache command (apt-cache policy python-numpy)
would seem to be reporting that python-numpy *is* installed.
 
I would suggest that you re-install numpy with commands like the following:
 
$ apt-get remove python-numpy
$ apt-get install python-numpy
 
then try firing up python and issuing the:
 
>>> import imp
>>> imp.find_module('numpy')
 
commands again.
 
Tim
 
On Thu, Mar 10, 2016, at 15:05, Dev vasu wrote:
> Dear Sir,
>

> I have tried finding the path to numpy directory but it seems there is
> no numpy installed
>
>
" imp.find_module('numpy')
> Traceback (most recent call last):
>
File "", line 1, in 
>
ImportError: No module named numpy "
>
>

> I have searched in python 2.7/dist-packages i could only find nipype .
>
>

> Thanks Vasudev
>
> On 10 March 2016 at 21:54, Timothy B. Brown  wrote:
>> Hi again Vasudev,
>>
>> From what you've put in your message below, it looks to me like you
>> may be missing an important character in the following lines.
>>
>> You have:
>>
>> export FSLDIR=/usr/share/fsl/5.0
>> ${FSLDIR}/etc/fslconf/fsl.sh
>>
>> you should have:
>>
>> export FSLDIR=/usr/share/fsl/5.0
>> . ${FSLDIR}/etc/fslconf/fsl.sh
>>
>> Notice the "." at the beginning of the second line. That "." is very
>> important.
>>
>> Please verify that it is there.  You will also need to verify that
>> /usr/share/fsl/5.0 is the correct directory in which you have FSL
>> installed.
>>
>> If /usr/share/fsl/5.0 is where you have FSL installed, you do have
>> the "." at the beginning of the second line above, and you still get
>> the "Permission denied" error, then try issuing the following command
>> and sending the output to me.
>>
>> $ ls -l /usr/share/fsl/5.0/etc/fslconf
>>
>> Regards,
>>
>> Tim
>>
>> On Thu, Mar 10, 2016, at 14:41, Dev vasu wrote:
>>> Dear Sir,
>>>

>>>
Thanks for this information i will try it now, i also had another doubt
>>>
>>>

>>> In the script the path to  FSL directory suggested as follows

>>>
>>> " export FSLDIR=/usr/share/fsl/5.0
${FSLDIR}/etc/fslconf/fsl.sh
>>> and when i run the script i am incurring following error

>>>
>>> dev@dev-OptiPlex-780:~/Documents/Pipelines-master/PreFreeSurfer$
./PreFreeSurferPipeline.mine.sh

./PreFreeSurferPipeline.mine.sh: line
170: /usr/share/fsl/5.0/etc/fslconf/fsl.sh: Permission denied "
>>>

>>>

>>>
>>> Could you please let me know the reason for the error.


>>>

>>>
>>> Thanks
>>>

>>>
>>> Vasudev
>>>
>>> On 10 March 2016 at 21:36, Timothy B. Brown  wrote:
>>>> Vasudev,
>>>>
>>>> It seems like you may need to provide the Python environment with
>>>> some indication of where to find the installation of numpy. I'm not
>>>> sure why this would be happening unless for some reason the Python
>>>> installation you are using when running the HCP Pipeline Scripts is
>>>> different from the Python installation that you would use if you
>>>> simply entered python at the command line.
>>>>
>>>> Let's try the following.
>>>>
>>>> First, you will need to know where numpy is installed on your
>>>> system.
>>>>
>>>> Based on the output from your apt-cache command, it looks to me like you 
>>>> are working on an Ubuntu system and have installed numpy with a command 
>>>> something like:
>>>>
>>>> apt-get install python-numpy
>>>>
>>>> If that is the case, you should be able to find out where numpy is 
>>>> installed by starting up Python from a terminal/shell and issuing commands 
>>>> similar to the following:
>>>>
>>>> $ python
>>>> Python 2.7.9 (default, Apr  2 2015, 15:33:21)
>>>> [GCC 4.9.2] on linux2
>>>> Type "help", "copyright", "credits" or "license" for more
>>>> information.
>>>>
>>> import imp
>>>> >>> imp.find_module('numpy')
>>>> (None, '/usr/lib/python2.7/dist-packages/numpy', ('', '', 5))
>>>>
>>> quit()
>>>> $
>>>>
>>>> The directory listed af

Re: [HCP-Users] PostFreesufer script : Atlas transform and Inverse Atlas Transform

2016-03-19 Thread Timothy B. Brown
Vasudev,
 
Yes, that is very likely to be the problem.
 
In general, if a part of the pipeline fails, it is **very likely** that
it will not generate the necessary files for the next step.
 
If you have not been able to successfully run all of the Pre-FreeSurfer
pipeline, I would not expect the necessary files to be available in
order to run the next major step which is the FreeSurfer pipeline.
 
Tim


On Thu, Mar 17, 2016, at 12:47, Dev vasu wrote:
> Dear Sir,
>
>

> After running Prefreesuferpipeline.sh my script does not generate
> "acpc_dc2standard" and  "acpc_dc_restore" is it because my
> PreFreesurferpipeline.sh got terminated in middle, since i could not
> do gradient unwarping ?.
>
>

> Thanks
> Vasudev
>
>
> On 17 March 2016 at 18:41, Timothy B. Brown  wrote:
>> Vasudev,
>>
>> > I would like to know if the  "acpc_dc2standard"  and
>> > "standard2acpc_dc" atlas was provided by HCP, or whether they are
>> > output generated from Freesurfer.sh script,
>>
>> The file acpc_dc2standard.nii.gz is generated in the
>> AtlasRegistrationToMNI152_FLIRTandFNIRT.sh script which is invoked by
>> the PreFreeSurferPipeline.sh script.
>> It is subsequently used in the PostFreeSurferPipeline.sh script and
>> in the GenericfMRIVolumeProcessingPipeline.sh script.
>>
>> If you search in your pipelines directory for acpc_dc2standard in all files, 
>> you should see something like:
>>
>> [HCPpipeline@login01 Pipelines-3.6.0]$ find . -exec grep --with-filename "acpc_dc2standard" {} \;
>> ./PreFreeSurfer/PreFreeSurferPipeline.sh:    --owarp=${AtlasSpaceFolder}/xfms/acpc_dc2standard.nii.gz \
>> ./PostFreeSurfer/PostFreeSurferPipeline.sh:AtlasTransform="acpc_dc2standard"
>> ./fMRIVolume/GenericfMRIVolumeProcessingPipeline.sh:AtlasTransform="acpc_dc2standard"
>> [HCPpipeline@login01 Pipelines-3.6.0]$
>>
>> The file standard2acpc_dc.nii.gz file is similarly generated in the
>> AtlasRegistrationToMNI152_FLIRTandFNIRT.sh script and subsequently
>> used in the PostFreeSurferPipeline.sh script.
>>
>> > also could you please let
>> > me know if the "acpc_dc_restore" files which is used as naming
>> > convention in Postfreesurfer script are outputs generated by
>> > freesurfer.sh ?.
>>
>> The files named *acpc_dc_restore* are generated and used by the
>> PreFreeSurferPipeline.sh script. They are also used as inputs to
>> several subsequent pipelines including FreeSurferPipeline.sh,
>> PostFreeSurferPipeline.sh, and
>> GenericfMRIVolumeProcessingPipeline.sh.
>>
>> Regards,
>>
>> Tim
>>
>> --
>> Timothy B. Brown
>> Business & Technology Application Analyst III
>> Pipeline Developer (Human Connectome Project)
>> tbbrown(at)wustl.edu
>> 
>> The material in this message is private and may contain Protected
>> Healthcare Information (PHI).
>> If you are not the intended recipient, be advised that any
>> unauthorized use, disclosure, copying
>> or the taking of any action in reliance on the contents of this
>> information is strictly prohibited.
>> If you have received this email in error, please immediately notify
>> the sender via telephone or
>> return mail.
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] PrefreesuferPipeline :Field Map Preprocessing and Gradient Unwarping

2016-03-19 Thread Timothy B. Brown
Vasudev,
 
On 14 Mar 2016 at 12:13, Dev vasu  wrote:
> When i run Prefreesurfer pipeline, the execution fails at  the start of  
> :Field Map Preprocessing and Gradient Unwarping in 
> T2WToT1wDistortionCorrectAndReg.sh following is the error that i
> am getting
>
> "  START: T2WToT1wDistortionCorrectAndReg.sh
>
> START: Field Map Preprocessing and Gradient Unwarping
>
> Cannot open volume -Tmean for reading!
>
> Could you please clarify me about this error message.
 
Based on the error messages you are showing above, it is very likely
that there is a malformed or missing command line argument somewhere in
your call to the PreFreeSurferPipeline.sh script.
 
For example, the PreFreeSurferPipeline.sh script calls the 
T2wToT1wDistortionCorrectAndReg.sh script which in turn *might* call the 
SiemensFieldMapPreprocessingAll.sh script (depending upon the exact version of 
the HCP Pipeline Scripts you are using and the command line input parameters 
that you give it).  In the SiemensFieldMapPreprocessingAll.sh script, there is 
a message output that looks like:
 
echo " START: Field Map Preprocessing and Gradient Unwarping"
 
This appears to be the last message you get before the error.
 
Shortly after the output of the "START: ..." message, there is a call to 
fslmaths that looks like:
 
${FSLDIR}/bin/fslmaths ${MagnitudeInputName} -Tmean ${WD}/Magnitude
 
Upon reaching that call to fslmaths, if the variable MagnitudeInputName is 
empty, then the actual command that will issued will look something like:
 
${FSLDIR}/bin/fslmaths -Tmean ${WD}/Magnitude
 
Note that fslmaths is expecting as its first argument the name of an input 
volume file.  In the above case, it has been given -Tmean as its first 
argument.  So fslmaths will try to open a volume file named "-Tmean".  Which 
is, of course, wrong. There will not be a file named -Tmean.  Thus, you get the 
error message: Cannot open volume -Tmean for reading!
 
In fact, you can duplicate the error message by issuing an fslmaths
command with -Tmean as its first argument.
 
[~]$ fslmaths -Tmean

Cannot open volume -Tmean for reading!

[~]$
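
(As an aside, and not something that is in the released scripts, a guard
like the following near the top of SiemensFieldMapPreprocessingAll.sh
would turn this into a much clearer error message; it is only a sketch.)

if [ -z "${MagnitudeInputName}" ] ; then
    echo "ERROR: no magnitude field map was specified (--fmapmag)" 1>&2
    exit 1
fi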
 
So, as Matt Glasser indicated in a previous reply, you will need to post
the exact text of your call to the PreFreeSurferPipeline.sh script in
order for anyone to be able to provide any more information other than
the guess that I've made above.  You will also need to include the
specification of the exact version of the HCP Pipeline Scripts you are
using (e.g. v3.4.0, v3.6.1, v3.9.0, etc.) because there may be
differences in what scripts are invoked by the PreFreeSurferPipeline.sh
script based on the version of the HCP Pipeline Scripts in use.
 
Sorry for the long-winded nature of this reply, but I am hoping that it will:
 1. Help you and others on the HCP-Users list further along the path of
being able to diagnose and debug such problems, and
 2. Help you understand why Matt was requesting that you post your call
to the PreFreeSurfer pipeline.
Regards,
 
Tim
 
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Freesuferpipeline : FreeSurferHiresPial - Segmentation fault

2016-03-19 Thread Timothy B. Brown
d "
>
 
 
What you have provided does not appear to be the full standard output
(or full standard error) from a run of PreFreeSurferPipeline.sh.
 
For example, I was hoping to see the output log messages that come very
early in the script that report on such things as all the variables that
are set by command line arguments and some of the relevant environment
variable settings.  There are also subsequent log messages that should
be in the standard output that would tell me things such as whether you
are "Performing Gradient Nonlinearity Correction" or "NOT PERFORMING
GRADIENT DISTORTION CORRECTION",  "Performing ... Readout Distortion
Correction" or "NOT PERFORMING READOUT DISTORTION CORRECTION",
"Performing Bias Field Correction", etc.
 
All of these messages should be included in the standard output
generated by a run of PreFreeSurferPipeline.sh.  These would help us
determine what is really happening before the point at which your run of
the FreeSurferPipeline.sh script aborts.
 
The full standard output will generally be a fairly large set of text and to 
get it you will need to run PreFreeSurferPipeline.sh and capture the stdout and 
stderr.  Please see:
 
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html
 
for an introduction to redirecting stdout and stderr to files.
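
For example, something along these lines (the log file names are just
examples):

./PreFreeSurferPipeline.sh <your options here> \
    > PreFreeSurferPipeline.stdout.log \
    2> PreFreeSurferPipeline.stderr.log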
 
**However**, before you try to capture and send the stdout and stderr,
you really need to do the following 4 things.
 1. Install and use a released version of the HCP Pipeline Scripts (not
a "Development" version).
 2. Install and make sure you are using the required version of
FreeSurfer.
 3. Do not use a modified version of PreFreeSurferPipeline.sh, instead
make use of the command line arguments that are available as part of
that script by starting with the
Examples/Scripts/PreFreeSurferPipelineBatch.sh and setting up the
environment variables in a script like
Examples/Scripts/SetUpHCPPipeline.sh.
 4. Similarly, do not use a modified version of FreeSurferPipeline.sh,
instead use the same approach as I've recommended for the
PreFreeSurferPipeline.sh script. There is an example "batch" script
for you to use as a starting point for running the
FreeSurferPipeline.sh script also.
 
Tim
 
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] PostFreesufer script : Atlas transform and Inverse Atlas Transform

2016-03-19 Thread Timothy B. Brown
Vasudev,
 
> I would like to know if the  "acpc_dc2standard"  and
> "standard2acpc_dc" atlas was provided by HCP, or whether they are
> output generated from Freesurfer.sh script,
 
The file acpc_dc2standard.nii.gz is generated in the
AtlasRegistrationToMNI152_FLIRTandFNIRT.sh script which is invoked by
the PreFreeSurferPipeline.sh script.
It is subsequently used in the PostFreeSurferPipeline.sh script and in
the GenericfMRIVolumeProcessingPipeline.sh script.
 
If you search in your pipelines directory for acpc_dc2standard in all files, 
you should see something like:
 
[HCPpipeline@login01 Pipelines-3.6.0]$ find . -exec grep --with-filename "acpc_dc2standard" {} \;

./PreFreeSurfer/PreFreeSurferPipeline.sh:    --owarp=${AtlasSpaceFolder}/xfms/acpc_dc2standard.nii.gz \

./PostFreeSurfer/PostFreeSurferPipeline.sh:AtlasTransform="acpc_dc2standard"

./fMRIVolume/GenericfMRIVolumeProcessingPipeline.sh:AtlasTransform="acpc_dc2standard"

[HCPpipeline@login01 Pipelines-3.6.0]$
 
The file standard2acpc_dc.nii.gz file is similarly generated in the
AtlasRegistrationToMNI152_FLIRTandFNIRT.sh script and subsequently used
in the PostFreeSurferPipeline.sh script.
 
> also could you please let
> me know if the "acpc_dc_restore" files which is used as naming
> convention in Postfreesurfer script are outputs generated by
> freesurfer.sh ?.
 
The files named *acpc_dc_restore* are generated and used by the
PreFreeSurferPipeline.sh script. They are also used as inputs to several
subsequent pipelines including FreeSurferPipeline.sh,
PostFreeSurferPipeline.sh, and GenericfMRIVolumeProcessingPipeline.sh.
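
(If you want to see exactly where, the same kind of search shown above
for acpc_dc2standard works for these files too, for example:)

find . -exec grep --with-filename "acpc_dc_restore" {} \;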
 
Regards,
 
Tim
 
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Freesuferpipeline : FreeSurferHiresPial - Segmentation fault

2016-03-19 Thread Timothy B. Brown
Please provide the following:
 * The version number of the HCP Pipeline Scripts you are using
   * Go to your HCP Pipelines Scripts directory (e.g.
 /home/vasudev/Documents/Pipelines-master) and issue the command:
 cat version.txt and send the output from that command
 * The version of FreeSurfer you are using
   * Issue the following command: cat ${FREESURFER_HOME}/build-
 stamp.txt and send the output from that command
   * You should source the environment script you are using when running
 the HCP Pipeline Scripts to get the FREESURFER_HOME environment
 variable set correctly prior to issuing the above command (e.g.
 source <your environment script>)
 * The path to the installed version of mris_make_surfaces you are using
   * Try issuing the command: which mris_make_surfaces
   * Similar to the above, please make sure you have sourced the
 environment set up script prior to issuing the which
 mris_make_surfaces command
 * The output from the following command:
   * mris_make_surfaces --all-info
 * Exact text of your invocation of the PreFreeSurferPipeline.sh script.
 * The captured standard output (stdout) and standard error (stderr)
   from your successful run of the PreFreeSurferPipeline.sh script.
 * Exact text of your invocation of the FreeSurferPipeline.sh script.
 * The captured standard output (stdout) and standard error (stderr)
   from your run of the FreeSurferPipeline.sh script.
Tim


On Fri, Mar 18, 2016, at 06:06, Dev vasu wrote:
> Dear Sir,
>

> I am able to run PreFreesurferPipeline.sh completely but i am still
> incurring the same error message which states as :
>
> "/home/vasudev/Documents/Pipelines-
> master/FreeSurfer/scripts/FreeSurferHiresPial.sh: line 48: 24890
> Segmentation fault  (core dumped) mris_make_surfaces -nsigma_above
> 2 -nsigma_below 3 -aseg aseg.hires -filled filled.hires -wm wm.hires
> -mgz -sdir $SubjectDIR -orig white.deformed -nowhite -orig_white
> white.deformed -orig_pial pial -T2dura "$mridir"/T2w_hires.norm -T1
> T1w_hires.norm -output .T2 $SubjectID lh "
>
>

> Thanks
> Vasudev
>
> On 17 March 2016 at 18:45, Timothy B. Brown  wrote:
>> Vasudev,
>>
>> > I am incurring segmentation fault error after during the course of
>> > running Freesurferpipeline
>> >
>> > Following is the error
>> >
>> > /home/vasudev/Documents/Pipelines-
>> > master/FreeSurfer/scripts/FreeSurferHiresPial.sh: line 48: 24964
>> > Segmentation fault  (core dumped) mris_make_surfaces -
>> > nsigma_above
>> > 2 -nsigma_below 3 -aseg aseg.hires -filled filled.hires -wm
>> > wm.hires
>> > -mgz -sdir $SubjectDIR -orig white.deformed -nowhite -orig_white
>> > white.deformed -orig_pial pial -T2dura "$mridir"/T2w_hires.norm -T1
>> > T1w_hires.norm -output .T2 $SubjectID lh "
>> >
>> > I am even attaching the log files generated during segmentation
>> > process
>> > ,kindly let me know the reason for this error
>>
>> I was unable to see anything in the log files that you attached that
>> would help me understand why you are getting a segmentation fault.
>>
>> Have you previously been able to successfully run the
>> PreFreeSurferPipeline.sh script?  Or are you still having the
>> problems with that mentioned in your previous messages?
>>
>> If you haven't yet successfully run the Pre-FreeSurfer Pipeline, then
>> I would suggest you get that to complete without problems before
>> trying to run the FreeSurfer pipeline.
>>
>> Regards,
>>
>> Tim
>>
>>
>> --
>> Timothy B. Brown
>> Business & Technology Application Analyst III
>> Pipeline Developer (Human Connectome Project)
>> tbbrown(at)wustl.edu
>> 
>> The material in this message is private and may contain Protected
>> Healthcare Information (PHI).
>> If you are not the intended recipient, be advised that any
>> unauthorized use, disclosure, copying
>> or the taking of any action in reliance on the contents of this
>> information is strictly prohibited.
>> If you have received this email in error, please immediately notify
>> the sender via telephone or
>> return mail.
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Freesuferpipeline : FreeSurferHiresPial - Segmentation fault

2016-03-19 Thread Timothy B. Brown
Vasudev,
 
> I am incurring segmentation fault error after during the course of
> running Freesurferpipeline
>
> Following is the error
>
> /home/vasudev/Documents/Pipelines-
> master/FreeSurfer/scripts/FreeSurferHiresPial.sh: line 48: 24964
> Segmentation fault  (core dumped) mris_make_surfaces -nsigma_above
> 2 -nsigma_below 3 -aseg aseg.hires -filled filled.hires -wm wm.hires
> -mgz -sdir $SubjectDIR -orig white.deformed -nowhite -orig_white
> white.deformed -orig_pial pial -T2dura "$mridir"/T2w_hires.norm -T1
> T1w_hires.norm -output .T2 $SubjectID lh "
>
> I am even attaching the log files generated during segmentation
> process
> ,kindly let me know the reason for this error
 
I was unable to see anything in the log files that you attached that
would help me understand why you are getting a segmentation fault.
 
Have you previously been able to successfully run the
PreFreeSurferPipeline.sh script?  Or are you still having the problems
with that mentioned in your previous messages?
 
If you haven't yet successfully run the Pre-FreeSurfer Pipeline, then I
would suggest you get that to complete without problems before trying to
run the FreeSurfer pipeline.
 
Regards,
 
Tim
 
 
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] PrefreesuferPipeline :Field Map Preprocessing and Gradient Unwarping

2016-03-19 Thread Timothy B. Brown
In this case, it would be *much better* to use the command line options
that are already available for the case in which you do not have field
map files rather than making any modifications (including commenting out
code) in the core scripts. I think things will go much smoother for you
if you follow this intended path.
 
I apologize if the fact that this was the intended path wasn't as clear
as it should have been. Thank you for bringing this up on the HCP-Users
list.  I hope that by going through this question and answer process
openly, we have saved a number of others from similar frustrations.
 
Tim
 
On Thu, Mar 17, 2016, at 15:47, Dev vasu wrote:
> Dear Sir,
>

> I will look into the suggestions. I would like to highlight one
> important aspect: I do not have field map acquisition files, and for
> this reason I did not provide any file path for the variable
> "MagnitudeInputName" in SiemensFieldMapPreprocessingAll.sh. Can I
> possibly comment out the necessary sections of code to avoid
> generating the Magnitude_brain and Magnitude_brain_mask images?
>
>

> Thanks Vasudev
>
> On 17 March 2016 at 21:37, Timothy B. Brown  wrote:
>> Hi Vasudev,
>>
>> What you attached was a modified version of the
>> PreFreeSurferPipeline.sh script.  You did not include the command
>> line you actually use to invoke your changed version of the script.
>> But before you send the command line you use to invoke your script, I
>> need to point out that the fact that you modified the
>> PreFreeSurferPipeline.sh script itself seems to indicate that there
>> is a fairly fundamental misunderstanding about how the HCP Pipeline
>> Scripts are intended to be used and customized for your situation.
>>
>> I would *very strongly* suggest that you should not be modifying any
>> scripts in the HCP Pipelines Scripts other than those in the
>> Examples/Scripts directory.  The Examples/Scripts directory contains
>> example "batch" scripts that are intended to illustrate how to invoke
>> the actual HCP Pipeline Scripts to process your data.  For example,
>> the file PreFreeSurferPipelineBatch.sh provides an example of how to
>> invoke the PreFreeSurferPipeline.sh script.  You should create a copy
>> of the PreFreeSurferPipelineBatch.sh script (e.g.
>> PreFreeSurferPipeline.Batch.mine.sh) and make modifications in that
>> file for your situation.
>>
>> There is also a file named SetUpHCPPipeline.sh that is an example
>> environment set up script (a.k.a. "environment script") for running
>> the HCP Pipeline Scripts. It is in this script (or your own copy of
>> it) that you should set environment variables to indicate such things
>> as where FSL is installed on your system (FSLDIR), where FreeSurfer
>> is installed on your system (FREESURFER_HOME), where the HCP Pipeline
>> Scripts are installed on your system (HCPPIPEDIR), etc.
>>
>> The "batch script" sources the "environment script" to set up all the
>> environment variables for running the pipeline scripts.
>>
>> Modifying the PreFreeSurferPipeline.sh is intended to be done when
>> there is a need to modify the fundamental behaviors of this
>> particular phase of the HCP Pipelines. You should not be modifying
>> the PreFreeSurferPipeline.sh script (or your
>> PreFreeSurferPipeline.mine.sh script) to set up environment variables
>> or change the values of command line arguments that are used to
>> specify the files you want the scripts to work on or the optional
>> mechanisms for doing that work.
>>
>> Not that this could not be made to work, but it makes it
>> significantly harder for anyone to help you to debug things when you
>> are modifying the core scripts instead of modifying the example
>> scripts that were intended to be modified or at least used as an
>> example for creating your own scripts that *call* the core scripts.
>>
>> You should not be modifying the parts of the PreFreeSurferPipeline.sh
>> script that get and set command line arguments (the calls to
>> opts_GetOpt1 right after the message "Parsing Command Line Options".)
>> This not only will make helping you more difficult, it should not be
>> necessary at all.  Every one of those options is specifiable from the
>> command line used to invoke the PreFreeSurferPipeline.sh script as is
>> shown in the PreFreeSurferPipelineBatch.sh script.
>>
>> Please have a look at the "batch" scripts and the "environment
>> script" in the Examples/Scripts directory and read the documentation
>> comments in those files. There is definitely room for improvement in
>> tho

Re: [HCP-Users] PostFreesufer script : Atlas transform and Inverse Atlas Transform

2016-03-19 Thread Timothy B. Brown
Vasudev,
 
Please look at the comments in the PreFreeSurferPipelineBatch.sh script in the 
Examples/Scripts directory. In particular, look at the comments that are headed 
with "Readout Distortion Correction:"  If you do not have field maps to use to 
do readout distortion correction, then you need to make sure the 
--avgrdcmethod= option used on the command line to invoke the 
PreFreeSurferPipeline.sh script is set to --avgrdcmethod=NONE. 
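
For illustration only (I have not seen your batch script, and the "..."
below just stands for the rest of your options), the invocation would
end up looking something like:

${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh \
    --avgrdcmethod=NONE \
    ...

with --fmapmag= and --fmapphase= simply not passed at all.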
 
In the modified version of PreFreeSurferPipeline.sh that you previously
sent, you have set things up so that the --avgrdcmethod= option is
completely ignored and set the AvgrdcSTRING variable to indicate that
you want to use Siemens field maps for readout distortion correction
 
AvgrdcSTRING=${FIELDMAP_METHOD_OPT}
 
When the AvgrdcSTRING value is set this way, the
PreFreeSurferPipeline.sh script will try to do readout distortion
correction using the specified Magnitude (--fmapmag=) and Phase (--
fmapphase=) field map files.
 
You've left the code that retrieves these two options alone.
 
MagnitudeInputName=`opts_GetOpt1 "--fmapmag" $@`

PhaseInputName=`opts_GetOpt1 "--fmapphase" $@`
 
and since you say you do not have field map files, I would expect that
you are not specifying those options on the command line when you invoke
your script.
 
Therefore, the MagnitudeInputName and PhaseInputName variables are
getting empty string values.
 
Having MagnitudeInputName be an empty string and having the options set
to use Siemens field maps would cause that empty string value to get
passed all the way down to the SiemensFieldMapPreprocessingAll.sh
script. This would cause the MagnitudeInputName variable in the
SiemensFieldMapPreprocessingAll.sh script to be an empty string and
cause the problem with fslmaths trying to open up a file named -Tmean.
 
Hope this is helpful.
 
Please consider making your modifications to the "batch" script instead
of a script that provides core functionality of the HCP Pipelines.
 
Tim
 
On Thu, Mar 17, 2016, at 13:32, Dev vasu wrote:
> Dear Sir,
>

> For the subjects that i use , We have not done any Field map
> acquisition.
>
>

> Thanks,
> Vasudev
>
>

>
> On 17 March 2016 at 19:01, Timothy B. Brown  wrote:
>> Vasudev,
>>
>> Yes, that is very likely to be the problem.
>>
>> In general, if a part of the pipeline fails, it is very likely that
>> it will not generate the necessary files for the next step.
>>
>> If you have not been able to successfully run all of the Pre-
>> FreeSurfer pipeline, I would not expect the necessary files to be
>> available in order to run the next major step which is the FreeSurfer
>> pipeline.
>>
>> Tim
>>
>> On Thu, Mar 17, 2016, at 12:47, Dev vasu wrote:
>>> Dear Sir,
>>>
>>>

>>> After running Prefreesuferpipeline.sh my script does not generate
>>> "acpc_dc2standard" and  "acpc_dc_restore" is it because my
>>> PreFreesurferpipeline.sh got terminated in middle, since i could not
>>> do gradient unwarping ?.
>>>
>>>

>>> Thanks
>>> Vasudev
>>>
>>>
>>> On 17 March 2016 at 18:41, Timothy B. Brown  wrote:
>>>> Vasudev,
>>>>
>>>> > I would like to know if the  "acpc_dc2standard"  and
>>>> > "standard2acpc_dc" atlas was provided by HCP, or whether they are
>>>> > output generated from Freesurfer.sh script,
>>>>
>>>> The file acpc_dc2standard.nii.gz is generated in the
>>>> AtlasRegistrationToMNI152_FLIRTandFNIRT.sh script which is invoked
>>>> by the PreFreeSurferPipeline.sh script.
>>>> It is subsequently used in the PostFreeSurferPipeline.sh script and
>>>> in the GenericfMRIVolumeProcessingPipeline.sh script.
>>>>
>>>> If you search in your pipelines directory for acpc_dc2standard in all 
>>>> files, you should see something like:
>>>>
>>>> [HCPpipeline@login01 Pipelines-3.6.0]$ find . -exec grep --with-filename "acpc_dc2standard" {} \;
>>>>
>>>> ./PreFreeSurfer/PreFreeSurferPipeline.sh:    --owarp=${AtlasSpaceFolder}/xfms/acpc_dc2standard.nii.gz \
>>>>
>>>> ./PostFreeSurfer/PostFreeSurferPipeline.sh:AtlasTransform="acpc_dc2standard"
>>>>
>>>> ./fMRIVolume/GenericfMRIVolumeProcessingPipeline.sh:AtlasTransform="acpc_dc2standard"
>>>>
>>>> [HCPpipeline@login01 Pipelines-3.6.0]$
>>>>
>>>> The standard2acpc_dc.nii.gz file is similarly generated in the
>>>> AtlasRegistrationToMNI152_FLIRTandFNIRT.sh script and subsequently
>>>> used in the PostFreeSurferPipe

Re: [HCP-Users] PrefreesuferPipeline :Field Map Preprocessing and Gradient Unwarping

2016-03-20 Thread Timothy B. Brown
Hi Vasudev,
 
What you attached was a modified version of the PreFreeSurferPipeline.sh
script.  You did not include the command line you actually use to invoke
your changed version of the script.  But before you send the command
line you use to invoke your script, I need to point out that the fact
that you modified the PreFreeSurferPipeline.sh script itself seems to
indicate that there is a fairly fundamental misunderstanding about how
the HCP Pipeline Scripts are intended to be used and customized for your
situation.
 
I would *very strongly* suggest that you should not be modifying any
scripts in the HCP Pipelines Scripts other than those in the
Examples/Scripts directory.  The Examples/Scripts directory contains
example "batch" scripts that are intended to illustrate how to invoke
the actual HCP Pipeline Scripts to process your data.  For example, the
file PreFreeSurferPipelineBatch.sh provides an example of how to invoke
the PreFreeSurferPipeline.sh script.  You should create a copy of the
PreFreeSurferPipelineBatch.sh script (e.g.
PreFreeSurferPipelineBatch.mine.sh) and make modifications in that file
for your situation.
 
There is also a file named SetUpHCPPipeline.sh that is an example
environment set up script (a.k.a. "environment script") for running the
HCP Pipeline Scripts. It is in this script (or your own copy of it) that
you should set environment variables to indicate such things as where
FSL is installed on your system (FSLDIR), where FreeSurfer is installed
on your system (FREESURFER_HOME), where the HCP Pipeline Scripts are
installed on your system (HCPPIPEDIR), etc.
 
The "batch script" sources the "environment script" to set up all the
environment variables for running the pipeline scripts.
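
For example, a bare-bones environment script might look something like
the following; the paths are placeholders for wherever the tools
actually live on your system, and the real example script sets several
more variables (such as the various HCPPIPEDIR_* subdirectory
variables), so use it as your starting point rather than this fragment:

   # my_SetUpHCPPipeline.sh -- example values only; adjust for your system
   export FSLDIR=/usr/local/fsl
   source ${FSLDIR}/etc/fslconf/fsl.sh
   export FREESURFER_HOME=/usr/local/freesurfer
   source ${FREESURFER_HOME}/SetUpFreeSurfer.sh
   export HCPPIPEDIR=${HOME}/projects/Pipelines

Near the top of your copy of the batch script you would then source it:

   source ${HOME}/scripts/my_SetUpHCPPipeline.sh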
 
Modifying the PreFreeSurferPipeline.sh is intended to be done when there
is a need to modify the fundamental behaviors of this particular phase
of the HCP Pipelines. You should not be modifying the
PreFreeSurferPipeline.sh script (or your PreFreeSurferPipeline.mine.sh
script) to set up environment variables or change the values of command
line arguments that are used to specify the files you want the scripts
to work on or the optional mechanisms for doing that work.
 
This is not to say that it could not be made to work, but it makes it
significantly harder for anyone to help you debug things when you are
modifying the core scripts instead of the example scripts that were
intended to be modified, or at least used as templates for creating your
own scripts that *call* the core scripts.
 
You should not be modifying the parts of the PreFreeSurferPipeline.sh
script that get and set command line arguments (the calls to
opts_GetOpt1 right after the message "Parsing Command Line Options").
Not only will this make helping you more difficult, it should not be
necessary at all.  Every one of those options can be specified on the
command line used to invoke the PreFreeSurferPipeline.sh script, as is
shown in the PreFreeSurferPipelineBatch.sh script.
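
As a sketch of what such an invocation looks like (the values are
placeholders, and the real script takes a number of additional template
and configuration options, indicated here by the trailing ellipsis):

   ${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh \
       --path=/data/mystudy \
       --subject=subject001 \
       --t1=/data/mystudy/subject001/unprocessed/T1w.nii.gz \
       --t2=/data/mystudy/subject001/unprocessed/T2w.nii.gz \
       --avgrdcmethod=NONE \
       ...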
 
Please have a look at the "batch" scripts and the "environment script"
in the Examples/Scripts directory and read the documentation comments in
those files. There is definitely room for improvement in those comments,
and I'm open to suggested improvements and revisions in the comments.
But I hope that they at least form a pretty good starting point.
 
Also, if you visit the following web page 
http://www.humanconnectome.org/courses/2015/exploring-the-human-connectome.php, 
scroll down to the section labelled "Course Schedule and Resources", then find 
and select the link for Practical 2 on Day 1 
(Day1_Practical2_wb_command&PipelinesI.pdf), you'll get a PDF document that I 
hope will be helpful.  In particular, within that PDF document, start reading 
on Page 6 in the section titled "Part2. HCP Pipelines I".
 
Tim
 
On Thu, Mar 17, 2016, at 13:06, Dev vasu wrote:
> Dear Sir,
>

> I am sending the file with command line options along with paths that
> i allocated  in PreFreesurferPipeline.sh. I am reviewing the file
> again and checking the options that i have set . If you could
> recognize some thing which i cannot could you please correct me
>

> Thanks Vasudev
>
> On 17 March 2016 at 18:38, Timothy B. Brown  wrote:
>> Vasudev,
>>
>> On 14 Mar 2016 at 12:13, Dev vasu  
>> wrote:
>> > When i run Prefreesurfer pipeline, the execution fails at  the start of  
>> > :Field Map Preprocessing and Gradient Unwarping in 
>> > T2WToT1wDistortionCorrectAndReg.sh following is the error that i
>> > am getting
>> >
>> > "  START: T2WToT1wDistortionCorrectAndReg.sh
>> >
>> > START: Field Map Preprocessing and Gradient Unwarping
>> >
>> > Cannot open volume -Tmean for reading!
>> >
>> > Could yo

Re: [HCP-Users] installation of libnetcdf.so.6 : Pial surface location

2016-03-24 Thread Timothy B. Brown
Hi Vasudev,

 It appears based on the link that you supplied
 (http://packages.ubuntu.com/precise/i386/libnetcdf6/download) that you
 have downloaded and installed the 32-bit version of the libnetcdf.so.6
 library.  The pipelines would not be expected to run under 32-bit
 Linux, so I'm betting you are actually using 64-bit Ubuntu Linux.
 
See the answers to the question at
http://askubuntu.com/questions/41332/how-do-i-check-if-i-have-a-32-bit-or-a-64-bit-os
for techniques to determine whether you are using 32 or 64 bit Ubuntu.
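 
A quick local check also works; on Ubuntu either of the following should
tell you:
 
   $ uname -m                     # x86_64 means a 64-bit kernel
   $ dpkg --print-architecture    # amd64 means 64-bit Ubuntu packages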
 
Assuming that you are using 64-bit Ubuntu as you should be, then you
will want the library available at
 
http://packages.ubuntu.com/precise/amd64/libnetcdf6/download
 
Note the amd64 instead of i386 in the URL.
 
Once you have installed the 64-bit version of the library, if things
still aren't working, then please issue the following commands and send
the output:
 
$ cd /usr/lib
$ ls -l libn*
 
This is to check your placement and configuration of the
libnetcdf* files.
 
Tim
 
On Thu, Mar 24, 2016, at 12:25, Dev vasu wrote:
> Dear Sir,
>
>  While running the pipeline i am getting following error "

>  "mris_make_surfaces: error while loading shared libraries:
>  libnetcdf.so.6: cannot open shared object file: No such file or
>  directory "

>

>  I have seen that this question was asked before and you have
>  suggested an answer here

> http://www.mail-archive.com/hcp-users@humanconnectome.org/msg00782.html

>

>  I have manually downloaded deb file of libnetcdf.so.6
>  (http://packages.ubuntu.com/precise/i386/libnetcdf6/download) and
>  installed in the directory /usr/lib but it is still giving me the
>  same error.

>

>  Thanks
>

>  Vasudev
>

>

>
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] How to monitor Pipeline's running with SGE (multi-CPU cores)

2016-05-12 Thread Timothy B. Brown
Hi Gengyan,
 
Please try the following commands:
 
To list all the jobs running on your grid:
 
$ qstat -u "*"
 
Be sure to enclose the asterisk in double quotes as shown.
 
To list all the jobs running on your grid that were submitted under your
user account:
 
$ qstat -u `whoami`
 
Be sure to use "back quotes" around the whoami command (single quotes
that go from the upper-left towards the lower-right.)
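 
If the back quotes are hard to read or type, the equivalent POSIX
command-substitution form does the same thing:
 
   $ qstat -u $(whoami)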
 
Let's see if you see any jobs running that way.
 
Tim
 
On Thu, May 12, 2016, at 14:24, Gengyan Zhao wrote:
> Hello HCP Masters,
>
> My question is how can I know that SGE and FSL are set up properly and
> FSL is running with multi-CPU cores. How can I monitor the parallel
> running of FSL with SGE?
>
> I'm using a 32-core 3.0GHz, 128GB RAM, Ubuntu 14.04 machine to
> run FSL.
> And I'm a HCP pipeline user. SGE was setup according to the
> instruction
> in the external link given by FSL website
> (http://chrisfilo.tumblr.com/post/579493955/how-to-configure-sun-grid-engine-for-fsl-under)
> .
> FSLPARALLEL=1. SGE_ROOT has not been set.
>
> Then when I run the pipeline, most of the time only 3.1% of the
> CPU is in
> usage. Since 3.1%*32=1, most of the time only one core is
> occupied. With
> the command top and 1 to see the activity of each core in real time,
> almost all the time only one core is in 100% usage. With the command
> qstat -f, I can only see the queue configured by myself following the
> instruction in external link. This is the output of qstat -f, when FSL
> (actually PreFreeSurferPipelineBatch.sh in the HCP pipeline,
> which calls
> a bunch of FSL tools) is running.
>
> queuename                  qtype resv/used/tot. load_avg arch     states
> -------------------------------------------------------------------------
> mainqueue@localhost        BIP   0/0/31         -NA-     -NA-     au
>
> Thanks.
>
> Best,
> Gengyan
>
> Research Assistant
> Medical Physics, UW-Madison
>
> ___________
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] How to monitor Pipeline's running with SGE (multi-CPU cores)

2016-05-16 Thread Timothy B. Brown
>> us...@humanconnectome.org; tbbr...@wustl.edu Subject: Re: [HCP-Users]
>> How to monitor Pipeline's running with SGE (multi-CPU cores)
>>
>> Hi Tim,
>>
>> Thank you for your quick response. I tried both of the commands while
>> the pipeline is running, and nothing showed up, totally nothing. I
>> copied the following from the terminal.
>>
>> $ qstat -u "*"
>>
>> $ qstat -u `whoami`
>> $
>>
>> For the installation and setup of SGE, I did them again as this link
>> https://scidom.wordpress.com/2012/01/18/sge-on-single-pc/
>>
>> I can see the qmon window pop up after I use the
>> qmon command, and I can see the sge processes running like this:
>> # ps aux | grep "sge"
>> sgeadmin 13315  0.0  0.0  55440  3848 ?        Sl   23:17   0:00
>> /usr/lib/gridengine/sge_execd
>> sgeadmin 13377  0.2  0.0 140184  7356 ?        Sl   23:17   0:00
>> /usr/lib/gridengine/sge_qmaster
>> root     13411  0.0  0.0  11748  2132 pts/4    S+   23:17   0:00 grep
>> --color=auto sge
>>
>>  So I assume the SGE was installed correctly, but not setup
>>  correctly. Besides setup a queue as mentioned in the link, I set
>>  FSLPARALLEL=1 in fsl.sh and sourced it, left FSL_ROOT unset and left
>>  FSLCLUSTER_DEFAULT_QUEUE unset.
>> Is there any error with my installation or configuration of the SGE?
>> Thanks.
>>
>> Best,
>> Gengyan
>>
>> From: Timothy B. Brown  Sent: Thursday, May 12,
>> 2016 3:26:22 PM To: Gengyan Zhao;  hcp-users@humanconnectome.org
>> Subject: Re: [HCP-Users] How to monitor Pipeline's running with SGE
>> (multi-CPU cores)
>>
>> Hi Gengyan,
>>
>> Please try the following commands:
>>
>> To list all the jobs running on your grid:
>>
>> $ qstat -u "*"
>>
>> Be sure to enclose the asterisk in double quotes as shown.
>>
>> To list all the jobs running on your grid that were submitted under
>> your user account:
>>
>> $ qstat -u `whoami`
>>
>> Be sure to use "back quotes" around the  whoami command (single
>> quotes that go from the upper-left towards the lower-right.)
>>
>> Let's see if you see any jobs running that way.
>>
>> Tim
>>
>> On Thu, May 12, 2016, at 14:24, Gengyan Zhao wrote:
>> > Hello HCP Masters,
>> >
>> > My question is how can I know the SGE and FSL is setup properly
>> > and FSL
>> > is running with multi-CPU cores. How can I monitor the parallel
>> > runing of
>> > FSL with SGE?
>> >
>> > I'm using a 32-core 3.0GHz, 128GB RAM, Ubuntu 14.04 machine to
>> > run FSL.
>> > And I'm a HCP pipeline user. SGE was setup according to the
>> > instruction
>> > in the external link given by FSL website
>> > (http://chrisfilo.tumblr.com/post/579493955/how-to-configure-sun-grid-engine-for-fsl-under)
>> > .
>> > FSLPARALLEL=1. SGE_ROOT has not been set.
>> >
>> > Then when I run the pipeline, most of the time only 3.1% of the CPU
>> > is in
>> > usage. SInce 3.1%*32=1, most of the time only one core is
>> > occupied. With
>> > the command top and 1 to see the activity of each core in real
>> > time,
>> > almost all the time only one core is in 100% usage. With the
>> > command
>> > qstat -f, I can only see the queue configured by myself
>> > following the
>> > instruction in external link. This is the output of qstat -f,
>> > when FSL
>> > (actually PreFreeSurferPipelineBatch.sh in the HCP pipeline, which
>> > calls
>> > a bunch of FSL tools) is running.
>> >
>> > queuename                  qtype resv/used/tot. load_avg arch     states
>> > -------------------------------------------------------------------------
>> > mainqueue@localhost        BIP   0/0/31         -NA-     -NA-     au
>> >
>> > Thanks.
>> >
>> > Best,
>> > Gengyan
>> >
>> > Research Assistant
>> > Medical Physics, UW-Madison
>> >
>> > ___
>> > HCP-Users mailing list
>> > HCP-Users@humanconnectome.org
>> >  http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>> --
>> Timothy B. Brown
>> Business & Technology Application Analyst III
>> Pipeline Developer (Human Connectome Project)
>> tbbrown(at)wustl.edu
>> 
>> The material in this message is private and may contain Protected
>> Healthcare Information (PHI).
>> If you are not the intended recipient, be advised that any
>> unauthorized use, disclosure, copying
>> or the taking of any action in reliance on the contents of this
>> information is strictly prohibited.
>> If you have received this email in error, please immediately notify
>> the sender via telephone or
>> return mail.
>> ___
>>  HCP-Users mailing list HCP-Users@humanconnectome.org
>>  http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>> ___
>>  HCP-Users mailing list HCP-Users@humanconnectome.org
>>  http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>> ___
>>  HCP-Users mailing list HCP-Users@humanconnectome.org
>>  http://lists.humanconnectome.org/mailman/listinfo/hcp-users

--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Questions about Run Diffusion Preprocessing Parallelly

2016-05-23 Thread Timothy B. Brown
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-24 Thread Timothy B. Brown
Do you already have a cluster set up and available, or are you looking
to set up the cluster?
 
If you are looking to set up a cluster, do you have the group of
machines that you want to use for the cluster already available or are
you thinking of setting up a cluster "in the cloud" (e.g. a group of
Amazon EC2 instances)?
 
Tim
 
On Tue, May 24, 2016, at 12:18, Dev vasu wrote:
> Dear Sir,
>
>  Currently i am running HCP Pipelines on a Standalone computer but i
>  would like to set up the pipeline on a Linux cluster, if possible
>  could you please provide me some details concerning procedures that i
>  have to follow .
>
>
>
>  Thanks Vasudev
> ___
>  HCP-Users mailing list HCP-Users@humanconnectome.org
>  http://lists.humanconnectome.org/mailman/listinfo/hcp-users
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-24 Thread Timothy B. Brown
You will then need to learn how to write a script to be submitted to the
SLURM scheduler.
 
I am not familiar with the SLURM scheduler, but from very briefly
looking at the documentation that you supplied a link to, I would think
that the general form of a script for the SLURM scheduler would be:
 
#!/bin/bash
#... SLURM Scheduler directives ... e.g. #SBATCH ... telling the system
#such things as how much memory to expect to use
#and how long you expect the job to take to run
#... initialization of the modules system ... e.g. source
#/etc/profile.d/modules.sh
#... loading of the required software modules ... e.g. module load fsl
... command to run the HCP Pipeline Script you want to run (e.g.
Structural Preprocessing, MEG processing, etc.) for the subject and
scans you want to process
 
Once you've written such a script (for example, one named myjob.cmd), it
appears that you would submit the job using a command like:
 
sbatch myjob.cmd
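 
Purely for illustration, a skeleton of such a script might look like the
following; the #SBATCH values, the module name, and the command at the
end are placeholders you would replace to suit your cluster and your
data:
 
   #!/bin/bash
   #SBATCH --job-name=hcp_prefreesurfer
   #SBATCH --time=24:00:00
   #SBATCH --mem=16G
   #SBATCH --output=hcp_prefreesurfer.%j.log

   source /etc/profile.d/modules.sh
   module load fsl

   # placeholder: your own edited copy of the example batch script
   ${HOME}/scripts/MyPreFreeSurferBatch.sh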
 
At the link that you provided, there is a section titled "Introductory
Articles and Tutorials by LRZ". I would suggest you follow the links
provided in that section, read that documentation, and submit any
questions you have to the service desk for the LRZ (a link to the
service desk is also on the page you supplied a link to.)
 
Tim
 
On Tue, May 24, 2016, at 13:00, Dev vasu wrote:
> Dear Sir,
>
>  I have the cluster available , following is the link to it  :
>  http://www.lrz.de/services/compute/linux-cluster/
>
>
>  Thanks Vasudev
>
>
>
>
>
> On 24 May 2016 at 19:43, Timothy B. Brown  wrote:
>> Do you already have a cluster set up and available, or are you
>> looking to set up the cluster?
>>
>> If you are looking to set up a cluster, do you have the group of
>> machines that you want to use for the cluster already available or
>> are you thinking of setting up a cluster "in the cloud" (e.g. a group
>> of Amazon EC2 instances)?
>>
>> Tim
>>
>> On Tue, May 24, 2016, at 12:18, Dev vasu wrote:
>>> Dear Sir,
>>>
>>>  Currently i am running HCP Pipelines on a Standalone computer but i
>>>  would like to set up the pipeline on a Linux cluster, if possible
>>>  could you please provide me some details concerning procedures that
>>>  i have to follow .
>>>
>>>
>>>
>>>  Thanks Vasudev
>>> ___
>>>  HCP-Users mailing list HCP-Users@humanconnectome.org
>>>  http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>> --
>> Timothy B. Brown
>> Business & Technology Application Analyst III
>> Pipeline Developer (Human Connectome Project)
>> tbbrown(at)wustl.edu
>> 
>> The material in this message is private and may contain Protected
>> Healthcare Information (PHI).
>> If you are not the intended recipient, be advised that any
>> unauthorized use, disclosure, copying
>> or the taking of any action in reliance on the contents of this
>> information is strictly prohibited.
>> If you have received this email in error, please immediately notify
>> the sender via telephone or
>> return mail.
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu


The material in this message is private and may contain Protected Healthcare 
Information (PHI). 
If you are not the intended recipient, be advised that any unauthorized use, 
disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. 
If you have received this email in error, please immediately notify the sender 
via telephone or 
return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Questions about Run Diffusion Preprocessing Parallelly

2016-06-14 Thread Timothy B. Brown
Hi Gengyan,
 
As far as I know, there is only one binary run as part of the Diffusion
Preprocessing Pipeline that is parallelized: the eddy binary, and the
latest versions of that binary are written to do their parallel
processing on a GPU.  We have broken the Diffusion Preprocessing
Pipeline out into three separate phases (Pre Eddy, Eddy, and Post Eddy)
with a separate script for each phase so that 3 sequential jobs can be
scheduled in such a way that the 2nd job, the Eddy job, is scheduled to
run on a processing node that has a GPU available.  (The Eddy job is
scheduled to run only upon successful completion of the Pre Eddy job,
and the Post Eddy job is scheduled to run only upon successful
completion of the Eddy job.) This significantly speeds up the Eddy phase
of the processing. However, the rest of the processing (both before and
after the Eddy phase) is, as far as I know, single threaded.
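 
For reference, chaining those three phases on an SGE-style cluster might
look something like the sketch below; the GPU resource request and the
scripts' own options (elided here) depend on your cluster and data, and
on SLURM the --dependency=afterok option plays the same role as
-hold_jid:
 
   $ qsub -N preeddy  DiffPreprocPipeline_PreEddy.sh  ...
   $ qsub -N eddy     -hold_jid preeddy -l gpu=1 DiffPreprocPipeline_Eddy.sh  ...
   $ qsub -N posteddy -hold_jid eddy    DiffPreprocPipeline_PostEddy.sh  ...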
 
The other significant parallelization we get is by doing just as Tim
Coalson described and scheduling the runs of many subjects
simultaneously.
 
Tim
 
On Tue, Jun 14, 2016, at 10:24, Gengyan Zhao wrote:
> Hi Tim,
>
> Thank you. However, the code I ran was
> "DiffusionPreprocessingBatch.sh", which should be able to involve
> multiple cores parallel computation within one subject, right? Because
> there is one line in the code saying:
> #Assume that submission nodes have OPENMP enabled (needed for eddy -
> at least 8 cores suggested for HCP data)
>
>  Thanks,
> Gengyan
>
> From: Timothy Coalson  Sent: Monday, June 13, 2016
> 9:13:54 PM To: Gengyan Zhao Cc: Glasser, Matthew; tbbr...@wustl.edu;
> hcp-users@humanconnectome.org Subject: Re: [HCP-Users] Questions about
> Run Diffusion Preprocessing Parallelly
>
> Many of the executables and operations in the pipelines do not use
> parallelization across cores within a single subject.  You may be
> better off setting up a queue with fewer cores available per slot, and
> submitting many subjects at once to the queue.  Make sure you set the
> queue name correctly in each "Batch" script.
>
> Note that some pipelines take much more memory than others, so some
> may allow you to run 32 subjects in parallel, while others may only be
> able to handle 2 or 3 in parallel.  Someone else will have to comment
> on the peak memory requirements per subject in each of the pipelines.
>
> Tim
>
>
> On Mon, Jun 13, 2016 at 8:53 PM, Gengyan Zhao
>  wrote:
>> Hi Matt, Tim Coalson and Tim Brown,
>>
>> Thank you very much for all your answers, and I'm sorry for my
>> delayed response.
>>
>> I ran the "DiffusionPreprocessingBatch.sh" to process the example
>> data of the subject 100307 on a 32-cores Ubuntu 14.04 Server. There
>> is no variable of OMP_NUM_THREADS being set, and the task was
>> submitted by an SGE queue with the setting of 16 cores and 16 slots.
>> However, when I used the command "top" to monitor the usage of the
>> cpu, only one core was occupied. Is there anything wrong? What shall
>> I do to involve multiple cores (OpenMP) then?
>>
>> Thank you very much.
>>
>> Best,
>> Gengyan
>>

>>
>> From: Timothy B. Brown  Sent: Monday, May 23, 2016
>> 3:45 PM To: Glasser, Matthew; Gengyan Zhao;  hcp-
>> us...@humanconnectome.org Subject: Re: [HCP-Users] Questions about
>> Run Diffusion Preprocessing Parallelly
>>
>> Gengyan,
>>
>> My understanding is as follows.  (Any OpenMP expert who sees holes in
>> my understanding should feel free to correct me...please.)
>>
>> If the compiled program/binary in use (e.g. eddy or wb_command) has
>> been compiled using the correct OpenMP related switches, then by
>> default, that program will use multi-threading in the places that multi-
>> threading was called for in the source code. It will use a maximum of
>> as many threads as there are processing cores on the system on
>> which the program is running.
>>
>> So, if the machine you are using has 8 cores, then a properly
>> compiled OpenMP program will use up to 8 threads (parallel
>> executions).  But this assumes that the code has been written with
>> that many potential threads of independent execution and compiled and
>> linked with the correct OpenMP switches and OpenMP libraries.
>>
>> For programs like eddy and wb_command, this proper compiling and
>> linking to use OpenMP should already have been done for you.
>>
>> The only other thing that I know of that can limit the number of
>> threads (besides the actual source code) is the setting of the
>> environment variable OMP_NUM_THREADS. If this variable is set to a
>> numeric value (e.g. 4), th

Re: [HCP-Users] errors with HCP_Pipelines structural processing

2016-10-13 Thread Timothy B. Brown
reeSurferPipeline.sh.e6340 
contains errors messages, which mean that nothing wrong with the first 
two part.


The errors messages are :
mghRead(/100307/T1w/100307/mri/brain.finalsurfs.mgz, -1): could not 
open file
mghRead(/100307/T1w/100307/mri/brain.finalsurfs.mgz, -1): could not 
open file
mghRead(/100307/T1w/100307/mri/brain.finalsurfs.mgz, -1): could not 
open file
/home/yoson/Applications/HCP_Pipelines/PostFreeSurfer/scripts/FreeSurfer2CaretConvertAndRegisterNonlinear.sh: 
line 63: /100307/T1w/100307/mri/c_ras.mat: No such file or directory


Nothing wrong with environment variables setting. I also have the 
privilege to read, write and execute all files.


Your generous help would be greatly appreciated.

Best regards,
YC Yao



___________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] mounting the HCP data on an ec2 instance instead of s3 access

2016-10-19 Thread Timothy B. Brown

Hello Denis,

I understand that Robert Oostenveld is planning to send you some 
materials from the latest HCP Course that illustrate how to mount the 
HCP OpenAccess S3 bucket as a directory accessible from a running EC2 
instance.


However, I'd like to clarify a few things.

First, the materials you will receive from Robert assume that you are 
using an Amazon EC2 instance (virtual machine) /that is based on an AMI 
supplied by NITRC/ (analogous to a DVD of software supplied and 
configured by NITRC to be loaded on your virtual machine). In fact the 
instructions show you how to create a new EC2 instance based on that 
NITRC AMI.


The folks at NITRC have done a lot of the work for you (like including 
the necessary software to mount an S3 bucket) and provided a web 
interface for you to specify your credentials for accessing the HCP 
OpenAccess S3 bucket. If you want to create an EC2 instance based on the 
NITRC AMI, then things should work well for you and the materials Robert 
sends to you should hopefully be helpful.


But this will not be particularly useful to you if you are using an EC2 
instance that is /not/ based upon the NITRC AMI. If that is the case, 
you will have to do a bit more work. You will need to install a tool 
called /s3fs/ ("S3 File System") on your instance and then configure 
s3fs to mount the HCP OpenAccess S3 bucket. This configuration will 
include storing your AWS access key information in a secure file on your 
running instance.


A good starting point for instructions for doing this can be found at: 
https://forums.aws.amazon.com/message.jspa?messageID=313009


This may not cover all the issues you encounter and you may have to 
search for other documentation on using s3fs under Linux to get things 
fully configured. The information at: 
https://rameshpalanisamy.wordpress.com/aws/adding-s3-bucket-and-mounting-it-to-linux/ 
may also be helpful.
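
As a rough sketch of what that setup usually boils down to (mounting
read-only under your home directory keeps the example simple, and the
keys are of course your own):

   $ echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
   $ chmod 600 ~/.passwd-s3fs
   $ mkdir -p ~/s3/hcp
   $ s3fs hcp-openaccess ~/s3/hcp -o passwd_file=${HOME}/.passwd-s3fs,ro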


Second, once you get the S3 bucket mounted, it is very important to 
realize that it is *read-only* from your system. By mounting the S3 
bucket using s3fs, you have not created an actual EBS volume on your 
system that contains the HCP OpenAccess data, just a mount point where 
you can /read/ the files in the S3 bucket.


You will likely want to create a separate EBS volume on which you will 
run pipelines, generate new files, and do any further analysis that you 
want to do. To work with the data, you will want the HCP OpenAccess S3 
bucket data to at least /appear/ to be on that separate EBS volume. One 
approach would be to selectively copy data files from the mounted S3 
data onto your EBS volume. However, this would be duplicating a lot of 
data onto the EBS volume, taking a long time and costing you money for 
storage of data that is already in the S3 bucket. I think a better 
approach is to create a directory structure on your EBS volume that 
contains files which are actually symbolic links to the read-only data 
that is accessible via your S3 mount point.
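
Purely to illustrate the idea, with placeholder paths and a placeholder
file name, linking a single file by hand would look something like:

   $ mkdir -p /data/mystudy/100307/T1w
   $ ln -s /s3/hcp/100307/T1w/T1w_acpc_dc_restore.nii.gz \
           /data/mystudy/100307/T1w/T1w_acpc_dc_restore.nii.gz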


The materials that Robert sent (or will send) you contain instructions 
for how to get and use a script that I've written that will create such 
a directory structure of symbolic links. After looking over those 
instructions, if it is not obvious to you what script I'm referring to 
and how to use it, feel free to send a follow up question to me.


Hope that's helpful,

  Tim

On 10/18/2016 10:51 AM, Denis-Alexander Engemann wrote:

Dear HCPers,

I recently had a conversation with Robert who suggested to me that it 
should be possible to directly mount the HCP data like an EBS volume 
instead of using the s3 tools for copying the data file by file.

Any hint would be appreciated.

Cheers,
Denis

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] gradunwarp - "array index out of bounds" error in coef_file_parse routine

2016-10-19 Thread Timothy B. Brown
I have made this change and released version v1.0.3 of gradunwarp with 
this fix.


Thanks,

 Tim

On 10/17/2016 11:21 AM, Keith Jamison wrote:
That is the correct fix for this problem.  It has no side-effects. 
Some scanners (eg: Siemens 7T) have coefficients of even higher 
orders, so you can actually increase the siemens_cas to 100 or so to 
accommodate  the full range.  That value just determines the 
preallocation, and the matrix is trimmed down to the maximum 
appropriate size after reading the file.


I think this is going to be fixed in the next release of gradunwarp 
(though when that will be...?)


-Keith

On Sat, Oct 15, 2016 at 1:14 PM, Antonin Skoch <a...@ikem.cz> wrote:


Dear experts,

I have an issue with processing my data (acquired at Siemens
Prisma 3T) by gradunwarp v1.0.2, downloaded from

https://github.com/Washington-University/gradunwarp/releases

It crashed with my coef.grad file by producing "array index out of
bounds" error in coef_file_parse routine.

I managed to get it working by increasing siemens_cas=20 in
core/globals.py

The corrected images look reasonable, with deformation going to
maximum approx 1-2 mm in off-isocenter regions.

Since I am not familiar with internals of gradunwarp, I would like
to assure myself the routine with my modification works OK and
there is no other unwanted consequence by increasing siemens_cas.
Could you please comment on?

Regards,

Antonin Skoch


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] HCP Pipelines v3.19.0 pre-reqs?

2017-03-01 Thread Timothy B. Brown

Hi Michael,

Unfortunately, the situation gets a little "messy". The prerequisites 
for v3.19.0 are (for the most part) the same as those for v3.4.0. 
However, due to a change that was introduced in FSL at version 5.0.7 
that is not backward compatible, there are some specific pipelines that 
require different versions of FSL.


For the most part, FSL 5.0.6 is required to get results that are 
compatible with the HCP results. But for the diffusion preprocessing 
pipeline, you will need FSL version 5.0.9.  In fact, we actually used a 
version of the eddy program that is to be included in 5.0.10 (which I do 
not believe is officially released just yet.) See the message here 
(https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=fsl;5ade7210.1611) for 
some information on how to create a "patched" version of 5.0.9 that 
contains the 5.0.10 version of the eddy program.


For the ReApplyFix pipeline, you will also need version 5.0.9. (Since it 
does not use the eddy program, it does not matter if this is the 
actually released v5.0.9 or one that is "patched" to include the 5.0.10 
version of eddy.) Since the DeDriftAndResample pipeline invokes the 
ReApplyFix pipeline, it requires FSL version 5.0.9.


I have tried to include specific version checks and warnings in any 
pipelines that specifically require v5.0.9 or above.
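
If it helps to double-check which version a given FSL installation
provides, the version string is recorded in the installation itself:

   $ cat ${FSLDIR}/etc/fslversion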


I am in the process of trying to get or verify that all the pipelines in 
the entire product can use the same version of FSL (v5.0.9+), but I had to 
maintain some of them in the FSL v5.0.6 state to make sure all our HCP 
data was processed the same way.


Wish I had a simpler answer for you,

  Tim

On 03/01/2017 02:40 PM, Michael Stauffer wrote:

Hi,

I'm installing v3.19 of the HCP Pipelines for a user on my system. Are 
the pre-reqs the same for 3.4.0 that I already have installed? The 
documentation points to the 3.4.0 release notes.


Thanks

-M

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Shared Library Error

2017-04-25 Thread Timothy B. Brown

Hi Tim,

What version of the HCP Pipeline Scripts are you using?

That error message, error while loading shared libraries: 
libmwlaunchermain, is typical when the compiled MATLAB executable cannot 
find the MCR.


As of the last official release (v3.21.0), several of the scripts still 
have hard coded paths to where they expect to find the MCR when using 
compiled MATLAB. As Matt points out, I'm in the process of updating the 
code so that it uses an environment variable to tell it where to look 
for the MCR. In the meantime, you can look for the places in the 
PostFix.sh script where the script variable matlab_compiler_runtime is 
set and change it to the appropriate path to the directory in which the 
R2013a MCR is installed on your system.
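
For example, something along these lines should locate the assignments
to change (the path to PostFix.sh assumes the standard Pipelines layout,
and the MCR path in the comment is just an illustration of the kind of
value you would put there):

   $ grep -n "matlab_compiler_runtime=" ${HCPPIPEDIR}/PostFix/PostFix.sh
   # then edit the matching lines so that, for example,
   # matlab_compiler_runtime=/usr/local/MATLAB/MATLAB_Compiler_Runtime/v81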


Hopefully, that will resolve this issue for you.

  Tim

On 04/25/2017 10:08 AM, Glasser, Matthew wrote:
We are in the process of updating the codebase that uses matlab to 
hopefully make it easier for folks to use compiled matlab.  Until that 
is completed, I would recommend using interpreted matlab if you can.


Peace,

Matt.

From: hcp-users-boun...@humanconnectome.org on behalf of Timothy 
Hendrickson <hendr...@umn.edu>

Date: Tuesday, April 25, 2017 at 10:03 AM
To: hcp-users@humanconnectome.org

Subject: [HCP-Users] Shared Library Error

Hello,

I am attempting to run the PostFix processing stream. I downloaded the 
Matlab2013a MCR version as required to run the stream.
I ran into problems once the PostFix stream attempted to call shared 
libraries through the LSB executable prepareICAs.


This is the error I received: error while loading shared libraries: 
libmwlaunchermain.so: cannot open shared object file: No such file or 
directory

-Tim

Timothy Hendrickson
Department of Psychiatry
University of Minnesota
Bioinformatics and Computational Biology M.S. Candidate
Office: 612-624-6441
Mobile: 507-259-3434 (texts okay)

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



The materials in this message are private and may contain Protected 
Healthcare Information or other information of a sensitive nature. If 
you are not the intended recipient, be advised that any unauthorized 
use, disclosure, copying or the taking of any action in reliance on 
the contents of this information is strictly prohibited. If you have 
received this email in error, please immediately notify the sender via 
telephone or return mail.




--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem regarding to mount 1200-release data to NITRC-CE

2017-05-15 Thread Timothy B. Brown

Dear Qinqin Li,

First of all, you are correct that in using the latest version of the 
NITRC-CE for HCP, the 900 subjects release is mounted at /s3/hcp. We 
just recently got the data from the 1200 subjects release fully uploaded 
to the S3 bucket. I am working with the NITRC folks to get the AMI 
modified to mount the 1200 subjects release data.


As for using s3fs yourself to mount the HCP_1200 data, it seems to me 
that you are doing the right thing by putting your access key and secret 
access key in the ~/.passwd-s3fs file. I think that the credentials you 
have that gave you access to the HCP_900 data /should/ also give you 
access to the HCP_1200 data. I will be running a test shortly to verify 
that that is working as I expect. In the meantime, you can also do some 
helpful testing from your end.


Please try installing the AWS command line interface tool (see 
https://aws.amazon.com/cli). Be sure to follow the configuration 
instructions at 
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html to 
run the aws configure command. This will get your AWS access key id and 
AWS secret access key into a configuration file for the AWS command line 
tool similar to way you've placed that information into a file for s3fs.
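
Running aws configure walks you through four prompts; the region and
output format shown here are typical choices rather than requirements:

   $ aws configure
   AWS Access Key ID [None]: <your AWS access key id>
   AWS Secret Access Key [None]: <your AWS secret access key>
   Default region name [None]: us-east-1
   Default output format [None]: json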


Then try issuing commands like the following:

   $ aws s3 ls s3://hcp-openaccess/HCP_900/

   $ aws s3 ls s3://hcp-openaccess/HCP_1200/

If both of these work and give you a long list of subject ID entries 
that look something like:


PRE 100206/
PRE 100307/
PRE 100408/
...

then your credentials are working for both the 900 subjects release and 
the 1200 subjects release.


If the HCP_900 listing works, but the HCP_1200 listing does not, then we 
will need to arrange for you to get different credentials.


  Tim

On 05/15/2017 08:48 AM, Irisqql0922 wrote:

Dear hcp teams,

I sorry to bother you again with same problem.

I used default options and mounted data successfully. But when I 
checked /s3/hcp, I found that data in it has only 900 subjects. 
Obviously, it's not the latest 1200-release data.



Since I want to analyse the latest version of data, I use s3fs to 
achieve my goal.

I use command:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs

It failed everytime. In the syslog file, I found error below:


I got my credential keys from connectome DB, and I quiet sure that I 
put it right in passwd-s3fs.


So I wonder, does my credential keys have access to hcp-openaccess 
when using s3fs to mount data? If the answer is yes, do you have any 
suggestion for me?


(note:  At first, I thought the problem may due to the  version of 
s3fs. So I created a new instance based on Amazon Linux AMI, and then 
download the lastest version of s3fs. But still, I failed because 
/'invalid credentials/')


thank you very much!

Best,

Qinqin Li

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem regarding to mount 1200-release data to NITRC-CE

2017-05-15 Thread Timothy B. Brown

Dear Qinqin Li,

Based on my checking so far, AWS credentials that give you access to the 
HCP_900 section of the S3 bucket should also give you access to the 
HCP_1200 section of the bucket.


One thing I would suggest is to go back to using the mount point 
provided by the NITRC-CE-HCP environment, but edit the system file that 
tells the system what to mount at /s3/hcp.


You will need to edit the file /etc/fstab. You will need to fire up the 
editor you use to make this change via sudo to be able to edit this file.


You should find a line in the /etc/fstab file that starts with:

   s3fs#hcp-openaccess:/HCP_900

Change the start of that line to:

   s3fs#hcp-openaccess:/HCP_1200

Once you make this change and /stop and restart your instance/, then 
what is mounted at /s3/hcp should be the 1200 subjects release data.
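
After the restart, you can verify what got mounted with something like:

   $ mount | grep /s3/hcp
   $ ls /s3/hcp | head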


  Tim

On 05/15/2017 10:07 AM, Timothy B. Brown wrote:


Dear Qinqin Li,

First of all, you are correct that in using the latest version of the 
NITRC-CE for HCP, the 900 subjects release is mounted at /s3/hcp. We 
just recently got the data from the 1200 subjects release fully 
uploaded to the S3 bucket. I am working with the NITRC folks to get 
the AMI modified to mount the 1200 subjects release data.


As for using s3fs yourself to mount the HCP_1200 data, it seems to me 
that you are doing the right thing by putting your access key and 
secret access key in the ~/.passwd-s3fs file. I think that the 
credentials you have that gave you access to the HCP_900 data /should/ 
also give you access to the HCP_1200 data. I will be running a test 
shortly to verify that that is working as I expect. In the meantime, 
you can also do some helpful testing from your end.


Please try installing the AWS command line interface tool (see 
https://aws.amazon.com/cli <https://aws.amazon.com/cli>). Be sure to 
follow the configuration instructions at 
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html 
to run the aws configure command. This will get your AWS access key id 
and AWS secret access key into a configuration file for the AWS 
command line tool similar to way you've placed that information into a 
file for s3fs.


Then try issuing commands like the following:

$ aws s3 ls s3://hcp-openaccess/HCP_900/

$ aws s3 ls s3://hcp-openaccess/HCP_1200/

If both of these work and give you a long list of subject ID entries 
that look something like:


PRE 100206/
PRE 100307/
PRE 100408/
...

then your credentials are working for both the 900 subjects release 
and the 1200 subjects release.


If the HCP_900 listing works, but the HCP_1200 listing does not, then 
we will need to arrange for you to get different credentials.


  Tim

On 05/15/2017 08:48 AM, Irisqql0922 wrote:

Dear hcp teams,

I sorry to bother you again with same problem.

I used default options and mounted data successfully. But when I 
checked /s3/hcp, I found that data in it has only 900 subjects. 
Obviously, it's not the latest 1200-release data.



Since I want to analyse the latest version of data, I use s3fs to 
achieve my goal.

I use command:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs

It failed everytime. In the syslog file, I found error below:


I got my credential keys from connectome DB, and I quiet sure that I 
put it right in passwd-s3fs.


So I wonder, does my credential keys have access to hcp-openaccess 
when using s3fs to mount data? If the answer is yes, do you have any 
suggestion for me?


(note:  At first, I thought the problem may due to the  version of 
s3fs. So I created a new instance based on Amazon Linux AMI, and then 
download the lastest version of s3fs. But still, I failed because 
/'invalid credentials/')


thank you very much!

Best,

Qinqin Li

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, 
be advised that any unauthorized use, disclosure, copying or the 
taking of any action in reliance on the contents of this information 
is strictly prohibited. If you have received this email in error, 
please immediately notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users




Re: [HCP-Users] Problem regarding to mount 1200-release data to NITRC-CE

2017-05-16 Thread Timothy B. Brown

Hi Qinqin Li,

If you leave the /etc/fstab file with s3fs#hcp-openaccess:/HCP_1200 in 
it instead of s3fs#hcp-openaccess:/HCP_900, then every time your system 
boots up, it should have the HCP_1200 data mounted at /s3/hcp. You 
should /not/ have to edit the /etc/fstab file again or issue a separate 
mount command to get access to the data each time you want to use it.


Using the AWS Command Line Interface (AWSCLI) tool is different from 
actually making the data available at a mount point. If the data is not 
mounted via s3fs, then you can always access it using commands like the 
aws s3 ls command that I asked you to use previously. However, in order 
for programs and scripts on your system (your instance) to open and use 
the files, you will then need to use aws commands to copy the files to 
your file system.


For example, given that we know that the file 
s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz exists, a 
command like:


   $ wb_view s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz

would */not/* be able to open that T1w.nii.gz file and allow you to view 
it. The s3 bucket doesn't supply an actual file system that allows this 
type of access. That is what s3fs is providing for you.


However, assuming you have a tmp subdirectory in your home directory, a 
pair of commands like:


   $ aws s3 cp
   s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz ~/tmp
   $ wb_view ~/tmp/T1w.nii.gz

would copy the T1w.nii.gz file from the S3 bucket to your ~/tmp 
directory and allow you to view it using Connectome Workbench.


There is also an aws s3 sync command that can be used to 
copy/synchronize whole "directories" of data from the S3 bucket. For 
example:


   $ aws s3 sync s3://hcp-openaccess/HCP_1200/100206 /data/100206

would copy the entire 100206 subject's data to the local directory 
/data/100206.


I should note that copying that entire directory means copying a fairly 
large amount of data. If you were copying it to a local machine (e.g. 
your own computer), this might take a long time (e.g. hours). In my 
experience, copying it from an S3 bucket to a running Amazon EC2 
instance still takes a while (about 15 minutes), but this is much more 
reasonable. Also, the aws s3 sync command works somewhat like the 
standard Un*x rsync command in that it determines whether the files need 
to be copied before copying them. If any of the files already exist 
locally and are unchanged, then those files are not copied from the S3 
bucket.
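
If you only need part of a subject's data, aws s3 sync also accepts
--exclude and --include filters. For example (adjust the pattern to
whatever subset you actually need):

   $ aws s3 sync s3://hcp-openaccess/HCP_1200/100206 /data/100206 \
         --exclude "*" --include "MNINonLinear/Results/*"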


  Tim

On 05/16/2017 09:23 AM, Irisqql0922 wrote:

Hi Tim,

I change first line in /etc/fstab file to

s3fs#hcp-openaccess:/HCP_1200,

and it worked!!! Thank you very much!

But it's not very convenient if I do it every time when I need to 
mount 1200-release data. The test you ask me to do yesterday can mount 
1200-release data directly, right?


Best,
Qinqin Li

On 05/16/2017 03:04, Timothy B. Brown <tbbr...@wustl.edu> wrote:


Dear Qinqin Li,

Based on my checking so far, AWS credentials that give you access
to the HCP_900 section of the S3 bucket should also give you
access to the HCP_1200 section of the bucket.

One thing I would suggest is to go back to using the mount point
provided by the NITRC-CE-HCP environment, but edit the system file
that tells the system what to mount at /s3/hcp.

You will need to edit the file /etc/fstab. You will need to fire
up the editor you use to make this change via sudo to be able to
edit this file.

You should find a line in the /etc/fstab file that starts with:

s3fs#hcp-openaccess:/HCP_900

Change the start of that line to:

s3fs#hcp-openaccess:/HCP_1200

Once you make this change and /stop and restart your instance/,
then what is mounted at /s3/hcp should be the 1200 subjects
release data.

  Tim

On 05/15/2017 10:07 AM, Timothy B. Brown wrote:


Dear Qinqin Li,

First of all, you are correct that in using the latest version of
the NITRC-CE for HCP, the 900 subjects release is mounted at
/s3/hcp. We just recently got the data from the 1200 subjects
release fully uploaded to the S3 bucket. I am working with the
NITRC folks to get the AMI modified to mount the 1200 subjects
release data.

As for using s3fs yourself to mount the HCP_1200 data, it seems
to me that you are doing the right thing by putting your access
key and secret access key in the ~/.passwd-s3fs file. I think
that the credentials you have that gave you access to the HCP_900
data /should/ also give you access to the HCP_1200 data. I will
be running a test shortly to verify that that is working as I
expect. In the meantime, you can also do some helpful testing
from your end.

Please try installing the AWS command line interface tool (see
https://aws.amazon.com/cli <https://aws.amazon.com/cli>). Be sure
to follow the c

Re: [HCP-Users] [Mac OSX] ERROR from cp --preserve=timestamps when running FreeSurferHiresWhite

2017-05-17 Thread Timothy B. Brown

Dear Sang-Young,

I would suggest that you replace the cp --preserve=timestamps commands 
in the scripts that are causing the problems with cp -p commands.


Until March 2016, those scripts used cp -p with the specific intent of 
preserving timestamps on the files that were copied. In the environment 
we currently use to run pipelines here at Washington University in St. 
Louis, the cp -p command was failing. The cp -p command is equivalent to 
cp --preserve=mode,ownership,timestamps. In our environment, it was not 
possible to preserve the ownership of files when doing those copies. 
Thus the cp -p command was failing.


Since the actual intent of the command was only to preserve the 
timestamps and preserving the other items (mode and ownership) was not 
particularly important, the commands were changed to cp 
--preserve=timestamps (asking only to preserve what is necessary).


We have since learned that for at least some versions of Mac OSX, the 
--preserve= option is not supported for the cp command. However, the -p 
option seems to still be supported in Mac OSX.


The situation now is that changing the command back to cp -p will not 
work for us, and leaving it as cp --preserve=timestamps will not work 
for people using some versions of Mac OSX.


It is my understanding that using GNU Coreutils on Mac OSX makes the 
--preserve= option to the cp command available under Mac OSX. But I have 
no experience with that, so I'm not in a position to recommend it.
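
For anyone who does want to go that route anyway, my understanding 
(untested on my part, so treat this as a sketch) is that Homebrew installs 
the GNU tools with a 'g' prefix, so the GNU cp would be invoked as gcp:

    $ brew install coreutils
    $ gcp --preserve=timestamps somefile /some/destination/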


I'm sorry for your frustration in dealing with this issue. Thank you for 
asking this on the list. Hopefully, this reply will prevent some others 
from experiencing the same frustration.


  Tim

On 05/17/2017 09:31 AM, Sang-Young Kim wrote:

Dear HCP users:

In the FreeSurferPipeline scripts (e.g., FreeSurferHiresWhite), it looks like 
the cp --preserve=timestamps option does not work in Mac OSX.
I have spent a lot of time trying to fix this problem, and it has caused a 
lot of frustration.

My question is whether I can use rsync -t to copy lh.white, lh.curv, etc. as 
an alternative method.
I'm not sure whether rsync -t also preserves the timestamps on the copied 
files.
Is there any other method to fix this problem?

It’ll be greatly appreciated if you can give me any solution.

Thanks in advance.

Sang-Young



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Libnetcdf.so.6 error during mris_make_surfaces

2017-06-28 Thread Timothy B. Brown

Hi Lisa,

Sorry for the delayed response. I've been working on a solution to the 
problem you raised.


As you noted, the libnetcdf.so.6 file is no longer available at the 
places previously suggested.


Please try the following:

1. Install the hdf5 library files that are prerequisites for libnetcdf6

 * Create a directory in your home directory called mylib

   $ cd
   $ mkdir mylib

 * Download libhdf5-serial-1.8.4_1.8.4-patch1-3ubuntu2_amd64.deb

   $ cd ~/Downloads
   $ wget http://archive.ubuntu.com/ubuntu/pool/universe/h/hdf5/libhdf5-serial-1.8.4_1.8.4-patch1-3ubuntu2_amd64.deb

 o This assumes you are running a 64-bit version of Ubuntu
   which hopefully you are. The HCP Pipelines themselves will
   not run correctly on a 32-bit version of Ubuntu.

 * Extract the shared library files from the .deb file, and put
   them in your mylib directory.

   $ cd ~/mylib
   $ ar xv ~/Downloads/libhdf5-serial-1.8.4_1.8.4-patch1-3ubuntu2_amd64.deb
   $ rm control.tar.gz debian-binary
   $ tar xvf data.tar.gz
   $ mv ~/mylib/usr/lib/* .
   $ rm -rf ~/mylib/usr
   $ rm data.tar.gz

2. Install libnetcdf6 library files

 * Download libnetcdf6_4.1.1-6_amd64.deb

   $ cd ~/Downloads
   $ wget http://archive.ubuntu.com/ubuntu/pool/universe/n/netcdf/libnetcdf6_4.1.1-6_amd64.deb

 * Extract the shared library files from this .deb file and put
   them in your mylib directory.

   $ cd ~/mylib
   $ ar xv ~/Downloads/libnetcdf6_4.1.1-6_amd64.deb
   $ rm control.tar.gz debian-binary
   $ tar xvf data.tar.gz
   $ mv ~/mylib/usr/lib/* .
   $ rm -rf ~/mylib/usr
   $ rm data.tar.gz

3. Change your LD_LIBRARY_PATH environment variable

   Use a text editor to add the following statements to the
   .bash_profile file in your home directory. (If a .bash_profile file
   does not exist in your home directory, simply create one with the
   following contents.)

   if [ -z "${LD_LIBRARY_PATH}" ] ; then
       export LD_LIBRARY_PATH=${HOME}/mylib
   else
       export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HOME}/mylib
   fi

4. Completely log off and log back in again. Try to run FreeSurfer's
   mris_make_surfaces binary using the environment set up you use for
   running the pipeline.

   $ source
   
//
   $ mris_make_surfaces

If the response to the above command is:

   mris_make_surfaces: error while loading shared libraries:
   libnetcdf.so.6: cannot open ...

then we haven't solved your problem.

On the other hand, if the response is the help information for the 
mris_make_surfaces program, something like:


 Help

   NAME
mris_make_surfaces

   SYNOPSIS
mris_make_surfaces [options]  

   DESCRIPTION
This program positions the tessellation of the cortical
   surface at the
...

then I believe you will be able to run mris_make_surfaces and thus the 
pipeline.
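
One additional sanity check you can do (it assumes mris_make_surfaces is on 
your PATH) is to ask the dynamic linker which libraries it resolves for the 
binary:

    $ ldd $(which mris_make_surfaces) | grep netcdf

If the libnetcdf.so.6 line points at a file under your ~/mylib directory 
rather than saying "not found", then your LD_LIBRARY_PATH setup is being 
picked up.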


Either way, please respond to the list to let us know whether this 
works. If it does work, then others will be able to find this thread as 
you found the previous threads about this topic. If it doesn't work, 
we'll investigate further.


Best Regards,

  Tim

On 06/27/2017 03:46 AM, Lisa Kramarenko wrote:

Dear experts,

I am getting the following error when the FreeSurferPipeline comes to 
the mris_make_surfaces step:


mris_make_surfaces: error while loading shared libraries: 
libnetcdf.so.6: cannot open shared object file: No such file or directory


It happens when I run it on Ubuntu 14 and 16; however, it works just 
fine on Ubuntu 12. I read this thread 
http://www.mail-archive.com/hcp-users@humanconnectome.org/msg00782.html 
and 
https://www.mail-archive.com/hcp-users@humanconnectome.org/msg02419.html, 
but I cannot find libnetcdf.so.6 anywhere in order to install it.
Do you have an idea about how to fix this and enable running the 
pipeline on a newer Ubuntu?

Thanks a lot!
Lisa

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/



__

Re: [HCP-Users] pipeline structural PostFreesurfer wmparc not found

2017-06-28 Thread Timothy B. Brown
hD
> Assistant Professional Researcher
>
> University of California, San Francisco
> 675 Nelson Rising Lane, Suite 190 | San Francisco, CA 94158
>
> Language Neurobiology Lab
> http://albalab.ucsf.edu/
> Memory and Aging Center
> http://memory.ucsf.edu/
> UCSF Dyslexia Center
> http://dyslexia.ucsf.edu/
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users




--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] pipeline structural PostFreesurfer wmparc not found

2017-06-29 Thread Timothy B. Brown
hD/*
/Assistant Professional Researcher/


*University of California, San Francisco*
675 Nelson Rising Lane, Suite 190 | San Francisco, CA 94158

Language Neurobiology Lab

http://albalab.ucsf.edu/

Memory and Aging Center**

_http://memory.ucsf.edu/_

UCSF Dyslexia Center

http://dyslexia.ucsf.edu/



*From:* Glasser, Matthew <glass...@wustl.edu>
*Sent:* Wednesday, June 28, 2017 4:10:27 PM
*To:* Mandelli, Maria Luisa; Brown, Tim; hcp-users@humanconnectome.org

*Cc:* Dierker, Donna
*Subject:* Re: [HCP-Users] pipeline structural PostFreesurfer wmparc 
not found
That’s weird as I change the subject directory in the pipelines but 
don’t normally have this error.  FreeSurfer just makes a symlink. 
 What version of FreeSurfer are you using?  Are you calling it using 
the HCP Pipelines?


Peace,

Matt.

From: "Mandelli, Maria Luisa" <mailto:marialuisa.mande...@ucsf.edu>>

Date: Wednesday, June 28, 2017 at 6:06 PM
To: Matt Glasser mailto:glass...@wustl.edu>>, 
"Brown, Tim" mailto:tbbr...@wustl.edu>>, 
"hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>

Cc: "Dierker, Donna" mailto:do...@wustl.edu>>
Subject: Re: [HCP-Users] pipeline structural PostFreesurfer wmparc not 
found


Hi! Thank you for your help!

The error in the log file is:


Invalid argument
ERROR reading 
/mnt/macdata/groups/language/malu-cloud/HCP_project/Pipelines_ExampleData/subjects/hcp350014/T1w/fsaverage/label/lh.BA1.label
Linux mmandelli-vm 3.2.0-126-virtual #169-Ubuntu SMP Fri Mar 31 
14:47:56 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


So actually it is true that I don't have the fsaverage directory in my 
subject folder, but it is in ${FREESURFER_HOME}/subjects/fsaverage.


Now my $SUBJECT_DIR is set to where my data are, not to $FREESURFER_HOME.


Am I missing a link somewhere?


*/Maria Luisa Mandelli, PhD/*
/Assistant Professional Researcher/


*University of California, San Francisco*
675 Nelson Rising Lane, Suite 190 | San Francisco, CA 94158

Language Neurobiology Lab

http://albalab.ucsf.edu/

Memory and Aging Center**

_http://memory.ucsf.edu/_

UCSF Dyslexia Center

http://dyslexia.ucsf.edu/



*From:* Glasser, Matthew <glass...@wustl.edu>
*Sent:* Wednesday, June 28, 2017 4:00:56 PM
*To:* Brown, Tim; hcp-users@humanconnectome.org; Mandelli, Maria Luisa

*Cc:* Dierker, Donna
*Subject:* Re: [HCP-Users] pipeline structural PostFreesurfer wmparc 
not found
Perhaps I missed it but I don’t actually see the error messages you 
are getting.


Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of "Timothy 
B. Brown" <tbbr...@wustl.edu>
Organization: Washington University in St. Louis
Date: Wednesday, June 28, 2017 at 5:00 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>, 
"marialuisa.mande...@ucsf.edu" <marialuisa.mande...@ucsf.edu>
Cc: "Dierker, Donna" <do...@wustl.edu>
Subject: Re: [HCP-Users] pipeline structural PostFreesurfer wmparc not 
found


Hi Maria,


My bet is that the error you are seeing is coming from the last call 
to recon-all in the FreeSurferPipeline.sh script. This would be in the 
statements that look like:


#Final Recon-all Steps
log_Msg "Final Recon-all Steps"
recon-all -subjid $SubjectID -sd $SubjectDIR -surfvolume \
    -parcstats -cortparc2 -parcstats2 -cortparc3 -parcstats3 \
    -cortribbon -segstats -aparc2aseg -wmparc -balabels \
    -label-exvivo-ec -openmp ${num_cores} ${seed_cmd_appendix}

log_Msg "Completed"

In particular, it would be the recon-all command shown above. (Note, 
depending on the version of the HCP Pipelines you are using, your 
version of the recon-all command may not have the -openmp ${num_cores} 
or the ${seed_cmd_appendix} in it.)



The -balabels option is asking the recon-all command to "Create 
Brodmann area labels by mapping from fsaverage".



You probably won't have an fsaverage directory in the subject 
directory which you are processing (in your case the 
/HCP_project/Pipelines_ExampleData/subjects/hcp350014/T1w directory). 
In that case, recon-all creates one for you as a symbolic link to the 
$FREESURFER_HOME/subjects/fsaverage directory.



It is in a subdirectory of that created fsaverage directory that files 
like lh.BA1.label should be found.
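
If that symbolic link turns out to be missing or broken, one workaround to 
try (a sketch using the paths from your log; normally recon-all handles this 
for you) is to check for the link and, if needed, create it by hand:

    $ ls -l /mnt/macdata/groups/language/malu-cloud/HCP_project/Pipelines_ExampleData/subjects/hcp350014/T1w/fsaverage
    $ ln -s "$FREESURFER_HOME/subjects/fsaverage" \
        /mnt/macdata/groups/language/malu-cloud/HCP_project/Pipelines_ExampleData/subjects/hcp350014/T1w/fsaverage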


So, the first things that need to be determined in your case are:

  * At the time of this failure, does the
/HCP_project/Pipelines_ExampleData/subjects/hcp

Re: [HCP-Users] hcp_fix script and settings

2017-11-06 Thread Timothy B. Brown

Hi Leah,

It should not be necessary for you to change any of the lines you show 
below from the FIX settings.sh file.


Of course, there are certainly lines after those lines in sections 
labeled "Part I MATLAB settings", "Part II Octave settings", and "Part 
III General Settings" that you will need to review and update to reflect 
your environment. But I believe that the lines you have copied into your 
email should be fine "as is".
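
For example, in the "Part I MATLAB settings" section, the main thing most 
people need to point at their own installation is the MATLAB location. In 
the copies of settings.sh I have looked at, that line looks roughly like the 
following (variable name and path quoted from memory, so please check them 
against your own file):

    # Part I MATLAB settings (illustrative; confirm against your settings.sh)
    FSL_FIX_MATLAB_ROOT=/usr/local/MATLAB/R2017b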


  Tim

On 11/02/2017 04:28 PM, Marta Moreno wrote:

Dear experts,

I want to use the “hcp_fix" script for clean up of resting state data 
after HCP pipelines. I am editing “settings.sh" and wanted to know if 
I have to edit these lines copied below:


# Settings file for FIX
# Modify these settings based on your system setup
FIXVERSION=1.065
#   (actually this version is 1.065 - see wiki page for details)

# Get the OS and CPU type
FSL_FIX_OS=`uname -s`
FSL_FIX_ARCH=`uname -m`

if [ -z "${FSL_FIXDIR}" ]; then
FSL_FIXDIR=$( cd $(dirname $0) ; pwd)
export FSL_FIXDIR
fi

Thank you very much,

Leah


On Oct 30, 2017, at 10:29 AM, Marta Moreno <mailto:mmorenoort...@icloud.com>> wrote:


Great. Thank you very much.

On Oct 29, 2017, at 4:44 PM, Glasser, Matthew <mailto:glass...@wustl.edu>> wrote:


Yes that is what we do.

Peace,

Matt.

From: Marta Moreno <mmorenoort...@icloud.com>
Date: Sunday, October 29, 2017 at 12:03 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: HCP Users <hcp-users@humanconnectome.org>

Subject: Re: [HCP-Users] ICA+FIX cleanup scripts

Thanks for your answer. I just want to use the “hcp_fix" script 
for clean up of resting state data after HCP pipelines. I only need 
to know if I just need to run the “hcp_fix" on data already 
preprocessed with HCP pipelines, after which I get my clean 
dtseries.nii ready for FC analyses.


Much appreciation.




On Oct 29, 2017, at 11:41 AM, Glasser, Matthew <mailto:glass...@wustl.edu>> wrote:


I think there is a readme with instructions on set up.  If you have 
specific questions about those, ICA+FIX questions might be best 
asked on the FSL list, as the FSL programmers don’t watch this 
list.  Here we can answer questions about CIFTI or approaching 
clean up of resting state data.


Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Marta 
Moreno <mmorenoort...@icloud.com>
Date: Sunday, October 29, 2017 at 9:11 AM
To: HCP Users <hcp-users@humanconnectome.org>

Subject: [HCP-Users] ICA+FIX cleanup scripts

Dear experts,

I would like to use ICA+FIX cleanup after HCP minimal preprocessing 
pipelines on high quality resting-state data acquired with a GE 
scanner. HCP minimal preprocessing is working fine and installed 
locally. I have downloaded now the tar file with ICA+FIX cleanup 
scripts. Could you please give me some guidelines on what are the 
steps to follow to have them running? I am using a MacPro.


Much appreciation for your help.



___
HCP-Users mailing list
HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
http://lists.humanconnectome.org/mailman/listinfo/hcp-users








___________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Connectome Coordination Facility)
tbbrown(at)wustl.edu
/



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Processing NITRC data using spot instances on AWS

2017-12-11 Thread Timothy B. Brown

Hi Michelle,

I have not yet had the opportunity to make use of spot instances to run 
data through HCP processing pipelines. So please understand that I may 
not be able to adequately answer your questions. However, I'll try to 
make some helpful suggestions.


With regard to there being no NITRC option for the AMI selection menu, 
I'm assuming that you are starting from your AWS EC2 console 
(https://console.aws.amazon.com/ec2). From that page, you should see the 
"Spot Requests" link under the "INSTANCES" heading along the left hand 
side. You then select the "Spot Requests" link to get taken to the 
"Request Spot Instances" page. Again, I'm assuming that you then select 
the "Request Spot Instances" button at the bottom of the table of your 
current Spot requests.


Near the top of the "Requirements" section on the "Request Spot 
Instances" page, there is an AMI specification row with a pull down 
selector that gives you choices of AMI's to use for your spot instance 
request. I'm guessing that when you say that "There's no NITRC option 
for the AMI selection menu, and when I try to search for it, there are 
no results" you mean that the pull down menu doesn't include a NITRC 
option, and you selected the "Search for AMI" button to the right of the 
AMI pull down menu and fill in the search field with "NITRC" or 
something similar. This indeed seems to give no results.


The default group of AMIs in which to search (see the pull down just to 
the right of the search text field) is "My AMIs". If you 
select/pull-down that menu and choose the AMI group "Community AMIs", 
then I think that your search for a NITRC AMI will return some AMIs from 
which to select. (At least it does when I try it.)


However...I don't think it would be appropriate for you to request a 
Spot Instance using one of the NITRC HCP AMIs. (Again, recall that I 
haven't had the opportunity to use spot instances, so I'm hypothesizing 
here.) You'll recall that the NITRC HCP AMI is not fully "ready to go" 
for logging in and running pipelines. There is some additional 
configuration that you need to do (install the FreeSurfer license, mount 
the S3 data, create a user account, etc.) before any instance created 
from a NITRC HCP AMI is really ready to use. My guess is that you would 
need to have an AMI that is fully "ready to use" prepared to use for a 
spot instance.  My thought is that the way to do this would be to start 
with the NITRC AMI, get an instance configured so that it is all ready 
to use, then use that instance as the starting point for creating your 
own AMI. Then the AMI that you created would be the one you would choose 
for your spot instances. (The AMI that you created would show up in the 
"My AMIs" group, which seems to be why that group is the default for 
searching for AMIs when creating a spot request.)
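
If you prefer the command line to the console for that last step, creating 
an AMI from a running, fully configured instance can be done with something 
like the following (the instance ID, name, and description are placeholders):

    $ aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "nitrc-hcp-ready-to-run" \
        --description "NITRC-CE HCP instance configured for HCP pipeline runs"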


As for your connection issue, I suspect you are correct that there is 
something wrong with your security settings. Which type of instance are 
you trying to access via DNS and/or SSH? A non-NITRC spot instance or a 
NITRC-AMI-based on-demand instance that you've created?


  Tim

On 12/06/2017 05:10 PM, Michelle Chiu wrote:

Hi,

Does anyone know how to create a spot instance on AWS with the NITRC 
Computational AMI?


Configuring and using on-demand instances has been really 
straightforward thanks to the wiki page, but when I try to make a 
spot instance, I have two main issues:

1) There's no NITRC option for the AMI selection menu, and when I try 
   to search for it, there are no results.
2a) When I try to access the instance via public DNS, my requests keep 
    timing out.
2b) When I try to access the instance via SSH, I get the error message 
    that 'the resource is temporarily unavailable'.

I suspect that my connection issues are because I didn't choose the 
security options correctly, so I'll try different configurations. If I 
can connect to the non-NITRC spot instance, I'll be able to transfer 
the s3 humanconnectome data from an on-demand instance and download 
FSL, but I wanted to see whether I've missed something.

Thanks!

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Connectome Coordination Facility)
tbbrown(at)wustl.edu
/


Re: [HCP-Users] NITRC-CE HCP v0.44 VS v0.45 AMI

2018-02-08 Thread Timothy B. Brown

Hi Michelle,

I am not yet familiar with what changes NITRC has incorporated into the 
newer NITRC-CE HCP AMI (v0.45). However, I don't believe that whether 
you are using v0.44 or v0.45 has any relationship to the problem you 
describe. The error message you provided seems to relate to the 
authorization scheme supported by Amazon's S3 service. (I am assuming 
that by a "Standard bucket" you mean an S3 bucket. I'm guessing you've 
created your own S3 bucket?)


Please have a look at the following pages:

 * https://github.com/yegor256/s3auth/issues/214
 * 
https://stackoverflow.com/questions/26533245/the-authorization-mechanism-you-have-provided-is-not-supported-please-use-aws4
 * https://www.s3express.com/kb/item44.htm
 * 
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
 * https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html

These seem to indicate that the problem is with the mechanism being used 
to send credentials (your access key and secret key) to the S3 bucket, 
and specifically with the version of the signing protocol being used to 
send those credentials.


What exactly are you trying to do when you get this error? Are you 
trying to move the data from the EBS volume to the S3 bucket? If so, 
what tool or command are you using to do that? Are you simply trying 
access data already in the S3 bucket? Again, if so, what tool or command 
are you using when you get the error message?
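
Also, if it turns out that you are using the AWS command line interface to 
do the transfer, one thing that has helped others who hit this particular 
error (offered as a suggestion, not something I have verified against your 
setup) is to explicitly tell the CLI to sign S3 requests with Signature 
Version 4:

    $ aws configure set default.s3.signature_version s3v4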


  Tim

On 02/04/2018 04:08 PM, Michelle Chiu wrote:

Hello,

I recently tried to transfer my EBS volume to a Standard S3 bucket for 
cheaper, more long-term storage but was met with the following error:

    The authorization mechanism you have provided is not supported. 
    Please use AWS4-HMAC-SHA256.


I originally created a spot instance with EBS volume attached using 
the NITRC-CE HCP v0.44 AMI (indicated by the connectome wiki), but I 
noticed there's a more recent v0.45 that was uploaded in January 2018.


Is the authorization error I'm getting due to the environmental 
settings of v0.44 and should I be using v0.45 instead?

Please advise. Thank you!

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Connectome Coordination Facility)
tbbrown(at)wustl.edu
/



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Error in FreeSurfer processing "recon-all.v6.hires: command not found"

2018-03-01 Thread Timothy B. Brown

Hi Pubuditha,


I'm assuming that you are using the latest release of the HCP Pipelines 
(v3.25.0). The changes that are causing you a problem are part of the 
ongoing conversion to using FreeSurfer version 6. Those changes should 
not have made it into a release yet. I apologize for that mistake.



Please try using version v3.24.0. I have briefly reviewed that version, 
and I believe that those FreeSurfer 6 related changes were not included 
in that release.
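
If you installed the Pipelines by cloning the GitHub repository, switching 
back should just be a matter of checking out the corresponding tag (assuming 
your clone has the release tags; the directory below is just an example):

    $ cd ~/Pipelines        # or wherever your HCP Pipelines clone lives
    $ git fetch --tags
    $ git checkout v3.24.0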



In the meantime, I will back those changes out of version v3.25.0 and 
create a new "bug-fix" release (v3.25.1).



Thank you for pointing this problem out so that others (hopefully) do 
not have to encounter it also.



Best Regards,


  Tim


On 03/01/2018 02:12 PM, Glasser, Matthew wrote:

We are working on a response to your question.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Pubuditha 
Abeyasinghe <pabey...@uwo.ca>
Date: Thursday, March 1, 2018 at 2:09 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Error in FreeSurfer processing 
"recon-all.v6.hires: command not found"


Hi all,

I am very new to the HCP pipelines for preprocessing the data and I am 
trying to adapt to them.
As a first step I am trying to run the pipeline with the example data 
that is given in the tutorial. The first part, the PreFreeSurfer 
processing, completed successfully.


But I am having a problem with the second part, the FreeSurfer 
processing. When I start it, the process instantly comes to an end, 
giving me the following error:


~/Pipelines/FreeSurfer/FreeSurferPipeline.sh: line 41: 
recon-all.v6.hires: command not found


I double-checked the FreeSurfer installation and sourced it as 
explained as well. Does this error have something to do with the 
installation? How can I fix it?



Your help is much appreciated!


Regards,
Pubuditha


Western University
*Pubuditha Abeyasinghe*
PhD Candidate
Department of Physics and Astronomy
Brain and Mind Institute
Western University
London, ON, Canada
email: pabey...@uwo.ca <mailto:pabey...@uwo.ca>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Connectome Coordination Facility)
tbbrown(at)wustl.edu
/



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Recon-all Errors in Freesurfer HCP pipeline

2018-07-13 Thread Timothy B. Brown
ght be found by other software
   using the libnetcdf library. Basically, you don't want to
   contaminate your standard library locations with old libraries.

   That is also why you should only set the LD_LIBRARY_PATH environment
   variable as shown above when you are running FreeSurfer v5.3.0-HCP.

Once you've installed version 6 of the libnetcdf library in a directory 
and made sure LD_LIBRARY_PATH points to that directory, try running the 
FreeSurferPipeline.sh script again.
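
A minimal sketch of what that setup might look like (the ~/mylib directory 
is just the example location used in earlier threads on this list; 
substitute wherever you actually put the library):

    # Only set this when running FreeSurfer v5.3.0-HCP
    export LD_LIBRARY_PATH="${HOME}/mylib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"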


Lastly, in the future please ask questions like this to the HCP-Users 
mailing list. That way other people can either benefit from the answer 
if it is correct or correct my answer if I've made mistakes. I've gone 
ahead and sent this response to that list and to your email address too 
because I don't know if you are subscribed to the list.


You can subscribe to the HCP-Users mailing list by visiting 
https://www.humanconnectome.org/contact-us. There is a place to join our 
two email lists, HCP-Announce and HCP-Users, on the right hand side of 
that page.


Hope that's helpful,

  Tim


On 2018-07-12 04:51 PM, Boukhdhir Amal wrote:

Hello Tim,


After running the PreFreeSurfer HCP pipeline in the HCP Connectome 
course and reproducing that on my computer successfully,
I am getting this issue with the FreeSurfer pipeline (the second 
step in the structural pipeline).


I attached the recon-all.log file and a screenshot of the error.

The FreeSurfer version I am using is:
freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0-HCP

Here is some more information about the distribution I am using:
aboukhdhir@thuya:/mnt/home_sq/aboukhdhir$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/";
SUPPORT_URL="http://help.ubuntu.com/";
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/";
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

I have also tried the same pipeline on 2 other computers with 
different amounts of RAM and I am getting the same error.

Do you have an idea why I am getting this error and how I can 
solve it?



Best,
Amal






--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Connectome Coordination Facility)
tbbrown(at)wustl.edu
/



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] libnetcdf.so.6

2018-11-26 Thread Timothy B. Brown

Dr. Minami,

Please review the message previously sent to the HCP-Users list here: 
https://www.mail-archive.com/hcp-users@humanconnectome.org/msg06498.html


Hopefully you will find the solution there.

Best Regards,

  Tim

On 11/26/18 01:56, 南 修司郎 wrote:

Dear HCP experts,

I am trying to perform HCP pipeline for the WU-Minn HCP Lifespan Pilot 
Data. The following error occurred during the FreeSurfer processing.


mris_make_surfaces: error while loading shared libraries: 
libnetcdf.so.6: cannot open shared object file: No such file or directory


(standard_in) 2: Error: comparison in expression


I'd like to appreciate if you could explain the solution.

Best regards.

Shujiro Minami M.D., Ph.D.
Dpt. of Otolaryngology National Tokyo Medical Center
2-5-1 Higashigaoka, Meguro, Tokyo 152-8902, Japan
E-mail shujiromin...@me.com <mailto:shujiromin...@me.com>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Connectome Coordination Facility)
tbbrown(at)wustl.edu
/



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users