Re: [HCP-Users] PBS vs fsl_sub in CHPC cluster
Hi Tim, that answers my question. I wondered whether the '--runlocal' option forced the program to run on the cluster's login node, or whether it would still accept PBS job options. I'll change the queuing_command variable then. As a new user of the Pipelines and the cluster, this has been the only confusing issue so far.

Thanks!
Carolina.

On Feb 5, 2015, at 14:02, Timothy B. Brown wrote:
Re: [HCP-Users] PBS vs fsl_sub in CHPC cluster
Hi Carolina,

In the example scripts provided with the HCP Pipelines, the --runlocal flag is intended to be an indication that you do not want to submit a job to a scheduler or grid engine (whether PBS or SGE). If you specify the --runlocal command line option when invoking one of the example scripts, then fsl_sub is not used as part of the command that invokes the actual pipeline script.

For example, if you would like to run the Pre-FreeSurfer pipeline, you might do so by invoking the example batch script (or your modified version thereof) in a way similar to the following:

$ cd /Examples/Scripts
$ ./PreFreeSurferPipelineBatch.sh

When invoked this way, the batch script will use the fsl_sub command to submit the actual processing to be done via the PreFreeSurferPipeline.sh script to a scheduler.

If instead you want to run the processing "locally", then you might do this by invoking the example batch script similar to the following:

$ cd /Examples/Scripts
$ ./PreFreeSurferPipelineBatch.sh --runlocal

To run "locally" means to not run the processing on a grid engine or cluster, but instead to run right on whatever machine you are currently logged in to, with your shell showing the output from the processing and your shell not returning to the command prompt until the processing is complete.

So, to submit things to a cluster/grid engine/scheduler, you would not want to use the --runlocal option.

If it turns out that the fsl_sub command doesn't work for you when submitting jobs to the CHPC cluster, you can change the line in the batch script that sets the queuing_command:

queuing_command="${FSLDIR}/bin/fsl_sub ${QUEUE}"

to something like:

queuing_command="qsub ${QUEUE}"

You may also have to change the definition of the QUEUE variable to name a queue that is available for you to use.
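For what it's worth, the approach Carolina describes elsewhere in this thread (letting PBS manage the job while still avoiding fsl_sub inside it) could be sketched as a wrapper job script along the following lines. This is only an illustration: the job name, queue name, and resource request are placeholders, not values from this thread or from the Pipelines scripts.

```shell
#!/bin/bash
#PBS -N PreFreeSurferBatch
#PBS -q dque                              # placeholder queue name
#PBS -l nodes=1:ppn=1,walltime=24:00:00   # placeholder resource request

# Inside the PBS job, run the batch script with --runlocal so it executes
# the pipeline directly on the allocated node instead of calling fsl_sub.
cd "${HCPPIPEDIR}/Examples/Scripts"
./PreFreeSurferPipelineBatch.sh --runlocal
```

A script like this would be submitted with qsub, so the resource requests come from the #PBS directives rather than from fsl_sub.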
Hope that helps,
Tim

On Thu, Feb 5, 2015, at 12:14, Malcolm Tobias wrote:
Re: [HCP-Users] PBS vs fsl_sub in CHPC cluster
fsl_sub should work with our cluster via PBS in almost all cases*

Perhaps the authors of the PreFreeSurferPipelineBatch.sh script could comment on what the --runlocal flag is intended to do and whether it's been tested on our cluster?

Malcolm

*it doesn't properly support submitting to the GPU queues.

On Thursday 05 February 2015 12:05:59 Ramirez San Martin, Carolina wrote:

--
Malcolm Tobias
314.362.1594
Re: [HCP-Users] PBS vs fsl_sub in CHPC cluster
I contacted Malcolm at the CHPC to ask about this already, and he told me that fsl_sub had been somewhat customized. Yet, when I tried to execute the scripts without the --runlocal option, I got an error and the job was stopped.

I guess I'm trying to figure out if the --runlocal flag is the best solution in this context.

Thanks.

On Feb 5, 2015, at 11:53, Glasser, Matthew wrote:
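The queuing_command if/else quoted from PreFreeSurferPipelineBatch.sh in this thread can be exercised on its own to see what --runlocal changes. In this sketch the FSLDIR and QUEUE values and the subject argument are illustrative placeholders, not the actual batch-script settings:

```shell
#!/bin/sh
# Stand-alone sketch of the batch script's queuing_command selection.
# FSLDIR, QUEUE, and the subject ID are placeholder values.
FSLDIR="/usr/local/fsl"
QUEUE="-q long.q"

build_command() {
    # $1 is non-empty when --runlocal was given on the command line
    if [ -n "$1" ]; then
        queuing_command=""                                # run in this shell
    else
        queuing_command="${FSLDIR}/bin/fsl_sub ${QUEUE} " # submit via fsl_sub
    fi
    printf '%s\n' "${queuing_command}PreFreeSurferPipeline.sh --subject=100307"
}

build_command "runlocal"   # pipeline command with no scheduler prefix
build_command ""           # same command, prefixed by fsl_sub
```

With --runlocal the pipeline script runs directly in the calling shell; otherwise the fsl_sub prefix (or qsub, per Tim's suggestion) hands it to the scheduler.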
Re: [HCP-Users] PBS vs fsl_sub in CHPC cluster
This might be a better question for the mailing list of this specific cluster, but I believe the version of fsl_sub on the Wash U cluster still works with PBS (I helped modify it to do this some years ago).

Peace,

Matt.

On 2/5/15, 11:48 AM, "Ramirez San Martin, Carolina" wrote:

> Hi,
>
> I'm starting to use HCP Pipelines in the WashU CHPC cluster, and got a bit confused on the way jobs are submitted to it, so I'd like to confirm a few details about this.
> As far as I could tell, since the cluster takes PBS jobs, and because fsl_sub works only on Sun Grid Engine(?), the way to run a Pipelines script would be to create a PBS script that executes the Pipelines script with the '--runlocal' command line option.
> For example, to run the PreFreeSurferPipelineBatch.sh script, the PBS script command would be './PreFreeSurferPipelineBatch.sh --runlocal'
>
> From the code(*) I assume that given this command, the fsl_sub queuing is bypassed, and when the job is submitted to the cluster it will use the options specified in the PBS script.
> Is that correct?
>
> Thanks,
> Carolina.
>
> __
> (*) For example, these lines in PreFreeSurferPipelineBatch.sh:
> "if [ -n "${command_line_specified_run_local}" ] ; then
>     echo "About to run ${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh"
>     queuing_command=""
> else
>     echo "About to use fsl_sub to queue or run ${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh"
>     queuing_command="${FSLDIR}/bin/fsl_sub ${QUEUE}"
> fi"

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users