Re: [galaxy-dev] should we specify which tools run on the cluster?

2013-08-02 Thread Thon de Boer
Hi Shenwiyn,

 

The definitions of regularjobs, longjobs, shortjobs, etc. are there to allow each job to be
run under a different environment on the cluster.

I am actually not using most of those definitions, except for the BWA tool,
which I want to run using 4 slots on our cluster, so I use the destination
multicorejobs4.

 

The nativeSpecification holds the options you would give if you were to use the
qsub command directly to submit jobs to the cluster.

 

-V -q short.q -pe smp 1

is what I normally use as the qsub options for a job that is fast, for
instance.
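
In job_conf.xml that string simply goes into the nativeSpecification param of a
DRMAA destination, e.g. the shortjobs destination from the config further down
(sketched here for illustration):

    <destination id="shortjobs" runner="drmaa" tags="cluster,short_jobs">
        <!-- passed through to the scheduler, i.e. the same flags as
             "qsub -V -q short.q -pe smp 1" on the command line -->
        <param id="nativeSpecification">-V -q short.q -pe smp 1</param>
    </destination>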

 

You don't need to specify a destination for each tool, since the destinations
section has a default, which is regularjobs in my case.

So only if you want to do something other than submit a regular job (which
only takes one slot) do you need to define something else, like I did for
BWA.
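
Put another way, the only per-tool wiring you need is one line in the tools
section; everything else falls through to the default destination. A
trimmed-down sketch of the relevant pieces from the full job_conf.xml further
down:

    <destinations default="regularjobs">
        <destination id="regularjobs" runner="drmaa" tags="cluster">
            <param id="nativeSpecification">-V -q long.q -pe smp 1</param>
        </destination>
        <destination id="multicorejobs4" runner="drmaa" tags="cluster,multicore_jobs">
            <param id="nativeSpecification">-V -q long.q -pe smp 4</param>
        </destination>
    </destinations>
    <tools>
        <!-- only BWA is mapped explicitly; every other tool goes to the
             default "regularjobs" destination -->
        <tool id="bwa_wrapper" destination="multicorejobs4"/>
    </tools>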

 

Hope that helps

 

Regards,

 

Thon

 

Thon deBoer, Ph.D., Bioinformatics Guru
California, USA | p: +1 (650) 799-6839 | m: thondeb...@me.com

 

From: shenwiyn [mailto:shenw...@gmail.com] 
Sent: Tuesday, July 30, 2013 11:52 PM
To: Thon Deboer
Cc: galaxy-dev@lists.bx.psu.edu
Subject: should we specify which tools run on the cluster?

 

Hi Thon Deboer,

I am new to Galaxy. I installed my Galaxy with Torque 2.5.0, and Galaxy
uses the pbs module to interface with TORQUE. But I have some questions about
the job_conf.xml:

1.) In your job_conf.xml you use regularjobs, longjobs, shortjobs, ... to run
different jobs. How does Galaxy know which tool belongs to regularjobs or
longjobs? And what is the meaning of nativeSpecification?

2.) Should we use the <tools> collection, i.e. <tool id="bwa_wrapper"
destination="multicorejobs4"/>, to specify bwa? Does it mean that bwa belongs to
multicorejobs4 and runs on the cluster?

3.) Do we need to specify, for every tool, which destination it belongs to?

I saw http://wiki.galaxyproject.org/Admin/Config/Jobs about this, but I am
not sure about the points above. Could you help me please?

 


shenwiyn

 

From: Thon Deboer <thondeb...@me.com>

Date: 2013-07-18 14:31

To: galaxy-dev <galaxy-dev@lists.bx.psu.edu>

Subject: [galaxy-dev] Jobs remain in queue until restart

Hi,

 

I have noticed that from time to time the job queue seems to be stuck and
can only be unstuck by restarting Galaxy.

The jobs seem to sit in the queued state, the Python job handler processes
are hardly ticking over, and the cluster is empty.

 

When I restart, the startup procedure realizes all jobs are in a "new state"
and then assigns a job handler, after which the jobs start fine.

 

Any ideas?

Torque 

 

Thon

 

P.S. I am using the June version of Galaxy and I DO set limits on my users in
job_conf.xml as shown below. (Maybe it is related? Before it went into dormant
mode, this user had started lots of jobs and may have hit the limit, but I
assumed this limit was the number of running jobs at one time, right?)

 

<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <!-- "workers" is the number of threads for the runner's work queue.
             The default from <plugins> is used if not defined for a <plugin>.
          -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" workers="8"/>
        <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner" workers="2"/>
    </plugins>
    <handlers default="handlers">
        <!-- Additional job handlers - the id should match the name of a
             [server:<id>] in universe_wsgi.ini.
          -->
        <handler id="handler0" tags="handlers"/>
        <handler id="handler1" tags="handlers"/>
        <handler id="handler2" tags="handlers"/>
        <handler id="handler3" tags="handlers"/>
        <!-- <handler id="handler10" tags="handlers"/>
        <handler id="handler11" tags="handlers"/>
        <handler id="handler12" tags="handlers"/>
        <handler id="handler13" tags="handlers"/>
        -->
    </handlers>
    <destinations default="regularjobs">
        <!-- Destinations define details about remote resources and how jobs
             should be executed on those remote resources.
          -->
        <destination id="local" runner="local"/>
        <destination id="regularjobs" runner="drmaa" tags="cluster">
            <!-- These are the parameters for qsub, such as queue etc. -->
            <param id="nativeSpecification">-V -q long.q -pe smp 1</param>
        </destination>
        <destination id="longjobs" runner="drmaa" tags="cluster,long_jobs">
            <!-- These are the parameters for qsub, such as queue etc. -->
            <param id="nativeSpecification">-V -q long.q -pe smp 1</param>
        </destination>
        <destination id="shortjobs" runner="drmaa" tags="cluster,short_jobs">
            <!-- These are the parameters for qsub, such as queue etc. -->
            <param id="nativeSpecification">-V -q short.q -pe smp 1</param>
        </destination>
        <destination id="multicorejobs4" runner="drmaa" tags="cluster,multicore_jobs">
            <!-- These are the parameters for qsub, such as queue etc. -->
            <param id="nativeSpecification">-V -q long.q -pe smp 4</param>
        </destination>

        <!-- <destination id="real_user_cluster" runner="drmaa">
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination> -->

        <destination id="dynamic" runner="dynamic">
            <!-- A destination that represents a method in the dynamic runner. -->
            <param id="type">python</param>
            <param id="function">interactiveOrCluster</param>
        </destination>
    </destinations>
    <tools>
        <!-- Tools can be configured to use specific destinations or handlers,
             identified by either the "id" or "tags" attribute.  If assigned to
             a tag, a handler or destination that matches that tag will be
             chosen at random.
          -->
        <tool id="bwa_wrapper" destination="multicorejobs4"/>
    </tools>
    <limits>
        <!-- Certain limits can be defined. -->
        <limit type="registered_user_concurrent_jobs">500</limit>
        <limit type="unregistered_user_concurrent_jobs">1</limit>
        <limit type="concurrent_jobs" id="local">1</limit>

[galaxy-dev] should we specify which tools run on the cluster?

2013-07-31 Thread shenwiyn
Hi Thon Deboer,
I am new to Galaxy. I installed my Galaxy with Torque 2.5.0, and Galaxy uses
the pbs module to interface with TORQUE. But I have some questions about the
job_conf.xml:
1.) In your job_conf.xml you use regularjobs, longjobs, shortjobs, ... to run
different jobs. How does Galaxy know which tool belongs to regularjobs or
longjobs? And what is the meaning of nativeSpecification?
2.) Should we use the <tools> collection, i.e. <tool id="bwa_wrapper"
destination="multicorejobs4"/>, to specify bwa? Does it mean that bwa belongs to
multicorejobs4 and runs on the cluster?
3.) Do we need to specify, for every tool, which destination it belongs to?
I saw http://wiki.galaxyproject.org/Admin/Config/Jobs about this, but I am not
sure about the points above. Could you help me please?




shenwiyn

Re: [galaxy-dev] should we specify which tools run on the cluster?

2013-07-31 Thread Ido Tamir

On Jul 31, 2013, at 8:52 AM, shenwiyn <shenw...@gmail.com> wrote:

> Hi Thon Deboer,
> I am new to Galaxy. I installed my Galaxy with Torque 2.5.0, and Galaxy
> uses the pbs module to interface with TORQUE. But I have some questions about
> the job_conf.xml:
> 1.) In your job_conf.xml you use regularjobs, longjobs, shortjobs, ... to run
> different jobs. How does Galaxy know which tool belongs to regularjobs or
> longjobs? And what is the meaning of nativeSpecification?

By specifying, as Thon did, the id of the tool and its destination in the
<tools> section; the destination holds the settings.
The nativeSpecification lets you set additional parameters that are passed
along with the submission call,
e.g. -pe smp 4 tells the grid engine to use the parallel environment smp with
4 cores.
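
Concretely, these are the two pieces from the job_conf.xml quoted earlier in
the thread: the tool entry picks the destination, and that destination's
nativeSpecification carries the parallel-environment request to the scheduler.

    <tool id="bwa_wrapper" destination="multicorejobs4"/>

    <destination id="multicorejobs4" runner="drmaa" tags="cluster,multicore_jobs">
        <!-- "-pe smp 4" asks the grid engine for the "smp" parallel environment with 4 slots -->
        <param id="nativeSpecification">-V -q long.q -pe smp 4</param>
    </destination>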

> 2.) Should we use the <tools> collection, i.e. <tool id="bwa_wrapper"
> destination="multicorejobs4"/>, to specify bwa? Does it mean that bwa belongs to
> multicorejobs4 and runs on the cluster?

Exactly.

> 3.) Do we need to specify, for every tool, which destination it belongs to?
> I saw http://wiki.galaxyproject.org/Admin/Config/Jobs about this, but I am
> not sure about the points above. Could you help me please?

Fortunately, there is a default.
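(That default is the default attribute on the destinations element in the
config quoted above; any tool without an explicit tool mapping is sent there.)

    <destinations default="regularjobs">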

