Hi,
I have configured a new Galaxy Project site with SLURM (version 14). I have one
server with a Galaxy instance, one node with the SLURM server, and two SLURM
worker nodes. I have compiled SLURM-DRMAA from source. When I run
“drmaa-run /bin/hostname” it works. But when I try to
Solved!
The problem was that I had configured two plugin entries with DRMAA in
job_conf.xml. I deleted this entry:
<plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
and now it works!
Thanks
On 25/09/2014, at 08:12, Pardo Diaz, Alfonso
Hi all,
I've configured Galaxy with the drmaa python module. This is really more of a
drmaa question...
We have no default queue set in moab and I can't seem to find a way to specify
a queue in the docs I've been looking at here -
http://drmaa-python.readthedocs.org/en/latest/tutorials.html
Hi Donny,
You should be able to specify the queue using the nativeSpecification field
of drmaa requests, e.g. in your job_conf.xml:
<destination id="batch" runner="pbs_drmaa">
    <param id="nativeSpecification">-q batch</param>
</destination>
Documentation on job_conf.xml's syntax by runner can be found
I feel like someone should respond to this but I must admit I don't
have a lot of ideas.
I assume you are able to use qsub to submit jobs from the Galaxy
server? It is worth verifying this before anything else. If that
doesn't work, the system configuration needs to be modified.
I think there
Does anyone have any tips about this, please? :)
Regards
From: galaxy-dev-boun...@lists.bx.psu.edu
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of Hakeem Almabrazi
Sent: Monday, April 21, 2014 3:49 PM
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] DRMAA configuring issue
Hi,
I am trying to get the DRMAA runner working for my local Galaxy cluster. However,
I am having a hard time configuring it on my system.
So far,
I have installed Torque 2.5.12 and it seems to work as expected.
I installed drmaa_1.0.17 and here is DRMAA_LIBRARY_PATH
Hi,
since a few days there are frequent error messages popping up in my
Galaxy log files.
galaxy.jobs.runners ERROR 2014-01-17 13:11:17,094 Unhandled exception
checking active jobs
Traceback (most recent call last):
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line
It is hard to test error states... but I assume you have the setup for
it :). Any chance you can apply these patches and let me know if they
fix the problem? I assume they will.
https://bitbucket.org/galaxy/galaxy-central/pull-request/300/potential-drmaa-fixes
-John
On Fri, Jan 17, 2014 at 6:15
Thanks John,
that fixed it for me!
Have a nice weekend,
Bjoern
Hello all,
On our main Galaxy instance, tracking galaxy-dist and using DRMAA/SGE,
jobs submitted to the cluster that are queued and waiting (qw)
are correctly shown in Galaxy as grey pending entries in
the history.
With my test instance tracking galaxy-central (along with a
new visual look and new icons), such
I've had success using the pbs runner rather than the drmaa runner for this
case. It's quite straightforward to specify the pbs_server for the pbs
runner. Works just as the documentation indicates.
- Bart
On Tue, Jul 9, 2013 at 1:46 PM, Bart Gottschalk bgott...@umn.edu wrote:
I haven't been
I haven't been able to find a way to make the drmaa runner work in this
situation. I'm going to move on to trying this with a pbs runner instead.
I will post to this thread if this works for me.
- Bart
Is it possible to specify the torque host as part of a DRMAA runner URL? I
haven't been able to find a native_options parameter to allow for this.
I'm using the old style cluster configuration.
drmaa://[native_options]/
Also, I haven't been able to find a list of native_options
Bart,
I believe drmaa://-q somehost@queue-name
will work. However, I could be very wrong. It has been a while since I
messed with the actual drmaa runners.
--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences
Research Computing Support
Hi,
Our Galaxy instance runs jobs on an SGE cluster using 2 job handlers. The
SGE cluster uses a Job Submission Verifier (JSV) that rejects any job
submission specifying core-binding strategies.
When Galaxy starts, the first jobs we submit work perfectly:
First job submission:
So I am
afraid to update our production instance.
Thanks in advance,
Liisa
From: Kyle Ellrott kellr...@soe.ucsc.edu
To: Nate Coraor n...@bx.psu.edu
Cc: galaxy-dev@lists.bx.psu.edu galaxy-dev@lists.bx.psu.edu
Date: 10/01/2013 07:44 PM
Subject: Re: [galaxy-dev] DRMAA runner
I'm running a test Galaxy system on a cluster (merged galaxy-dist on
January 4th), and I've noticed some odd behavior from the DRMAA job
runner.
I'm running a multithreaded setup: one web server, one job_manager, and
three job_handlers. DRMAA is the default job runner (the command for
tophat2 is
On Nov 20, 2012, at 8:15 AM, Peter Cock wrote:
On Thu, Nov 15, 2012 at 11:21 AM, Peter Cock p.j.a.c...@googlemail.com
wrote:
On Thu, Nov 15, 2012 at 10:12 AM, Peter Cock p.j.a.c...@googlemail.com
wrote:
On Thu, Nov 15, 2012 at 10:06 AM, Peter Cock p.j.a.c...@googlemail.com
wrote:
Hi
On Nov 27, 2012, at 12:03 PM, Peter Cock wrote:
On Tue, Nov 27, 2012 at 4:50 PM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 20, 2012, at 8:15 AM, Peter Cock wrote:
Is anyone else seeing this? I am wary of applying the update to our
production Galaxy until I know how to resolve this (other
On Tue, Nov 27, 2012 at 5:19 PM, Nate Coraor n...@bx.psu.edu wrote:
So a little defensive coding could prevent the segfault then (leaving
the separate issue of why the jobs lack this information)?
Indeed, I pushed a check for this in 4a95ae9a26d9.
Great. That will help.
This was a week
On Thu, Nov 15, 2012 at 11:21 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
On Thu, Nov 15, 2012 at 10:12 AM, Peter Cock p.j.a.c...@googlemail.com
wrote:
On Thu, Nov 15, 2012 at 10:06 AM, Peter Cock p.j.a.c...@googlemail.com
wrote:
Hi all,
Something has changed in the job handling, and
Hi all,
Something has changed in the job handling, and in a bad way. On my
development machine submitting jobs to the cluster didn't seem to be
working anymore (never sent to SGE). I killed Galaxy and restarted:
Starting server in PID 12180.
serving on http://127.0.0.1:8081
On Thu, Nov 15, 2012 at 10:06 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
Hi all,
Something has changed in the job handling, and in a bad way. On my
development machine submitting jobs to the cluster didn't seem to be
working anymore (never sent to SGE). I killed Galaxy and restarted:
On Thu, Nov 15, 2012 at 10:12 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
On Thu, Nov 15, 2012 at 10:06 AM, Peter Cock p.j.a.c...@googlemail.com
wrote:
Hi all,
Something has changed in the job handling, and in a bad way. On my
development machine submitting jobs to the cluster didn't
On Tue, Sep 18, 2012 at 7:11 PM, Scott McManus scottmcma...@gatech.edu wrote:
Sorry - that's changeset 7714:3f12146d6d81
-Scott
Hi Scott,
The good news is this error does seem to be fixed as of that commit:
TypeError: check_tool_output() takes exactly 5 arguments (4 given)
The bad news is
Odd, it works for me on EC2/Cloudman.
jorrit
On 09/19/2012 03:29 PM, Peter Cock wrote:
On Tue, Sep 18, 2012 at 7:11 PM, Scott McManus scottmcma...@gatech.edu wrote:
Sorry - that's changeset 7714:3f12146d6d81
-Scott
Hi Scott,
The good news is this error does seem to be fixed as of that
Hi all (and in particular, Scott),
I've just updated my development server and found the following
error when running jobs on our SGE cluster via DRMAA:
galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
finish method failed
Traceback (most recent call last):
File
I'll check it out. Thanks.
- Original Message -
Hi all (and in particular, Scott),
I've just updated my development server and found the following
error when running jobs on our SGE cluster via DRMAA:
galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
finish
I have to admit that I'm a little confused as to why you would
be getting this error at all - the job variable is introduced
at line 298 in the same file, and it's used as the last variable
to check_tool_output in the changeset you pointed to.
(Also, thanks for pointing to it - that made
Is it possible that you are looking at different classes? TaskWrapper's
finish method does not use the job variable in my recently merged code
either (line ~1045), while JobWrapper's does around line 315.
cheers,
jorrit
On 09/18/2012 03:55 PM, Scott McManus wrote:
I have to admit that I'm
Thanks, Jorrit! That was a good catch. Yes, it's a problem with the TaskWrapper.
I'll see what I can do about it.
-Scott
- Original Message -
Is it possible that you are looking at different classes?
TaskWrapper's
finish method does not use the job variable in my recently merged
On Tue, Sep 18, 2012 at 3:09 PM, Jorrit Boekel
jorrit.boe...@scilifelab.se wrote:
Is it possible that you are looking at different classes? TaskWrapper's
finish method does not use the job variable in my recently merged code
either (line ~1045), while JobWrapper's does around line 315.
Ok - that change was made. The difference is that the change
is applied to the task instead of the job. It's in changeset
7713:bfd10aa67c78, and it ran successfully in my environments
on local, pbs, and drmaa runners. Let me know if there are
any problems.
Thanks again for your patience.
Sorry - that's changeset 7714:3f12146d6d81
-Scott
- Original Message -
Ok - that change was made. The difference is that the change
is applied to the task instead of the job. It's in changeset
7713:bfd10aa67c78, and it ran successfully in my environments
on local, pbs, and drmaa
On 27/03/2012 11:03, Louise-Amélie Schmitt wrote:
On 26/03/2012 16:13, Nate Coraor wrote:
On Mar 26, 2012, at 5:11 AM, Louise-Amélie Schmitt wrote:
Hello everyone,
I wanted to start the drmaa job runner and followed the instructions
in the wiki, but I have this error message when I
Hello everyone,
I wanted to start the drmaa job runner and followed the instructions in
the wiki, but I have this error message when I start Galaxy:
galaxy.jobs ERROR 2012-03-23 15:28:49,845 Job runner is not loadable:
galaxy.jobs.runners.drmaa
Traceback (most recent call last):
File
Figured it out. It was an error introduced while resolving version control
conflicts.
--
Shantanu
On Jan 29, 2012, at 9:12 PM, Shantanu Pavgi wrote:
I am getting the following error with the latest galaxy-dist revision
'26920e20157f' update. The Python version is 2.6.6.
{{{
I am getting the following error with the latest galaxy-dist revision
'26920e20157f' update. The Python version is 2.6.6.
{{{
galaxy.jobs.runners.drmaa ERROR 2012-01-29 21:00:28,577 Uncaught exception
queueing job
Traceback (most recent call last):
File
BUMP
Does anyone have any idea on this? Our Galaxy is currently out of action
until this is sorted.
Thanks,
Chris
On 22/08/11 10:08, Chris Cole wrote:
Hi,
Following a recent update to our SGE, DRMAA is failing to load in
Galaxy, because the library path has changed. How do I change
Hi Chris,
Take a look at this file:
lib/galaxy/jobs/runners/drmaa.py
in your galaxy directory.
You can export a path for binaries here. For example:
export PATH=$PATH:/opt/bin/
before the export PYTHONPATH line.
Moreover, do not forget other path values in your service script or in your
galaxy user
Hi,
Following a recent update to our SGE, DRMAA is failing to load in
Galaxy, because the library path has changed. How do I change the
path for Galaxy to find the libdrmaa module?
Cheers,
Chris
--
Dr Chris Cole
Senior Research Associate (Bioinformatics)
College of Life Sciences
From: ambarish biswas [ambarishbis...@gmail.com]
Sent: July 27, 2011 4:18 PM
To: Ka Ming Nip
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] DRMAA options for SGE
Hi Ming,
just an idea: -w n might be hiding the error reporting. Are your jobs
getting
Hi Ming,
just an idea: -w n might be hiding the error reporting. Are your jobs
getting submitted and executed correctly?
For the queuing, you can add galaxy at the end, which makes it
drmaa://-w n -l mem_free=1G -l mem_token=1G -l
From: galaxy-dev-boun...@lists.bx.psu.edu [galaxy-dev-boun...@lists.bx.psu.edu]
On Behalf Of Ka Ming Nip [km...@bcgsc.ca]
Sent: July 26, 2011 10:39 AM
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] DRMAA options for SGE
Hi,
I am trying to configure the proper memory resource
Hi,
I am trying to configure the proper memory resource requests for my Galaxy tool.
This is what I have under the tool_runners section of universe_wsgi.ini
[galaxy:tool_runners]
...
mytoolname = drmaa://-l mem_free=1G -l mem_token=1G -l h_vmem=1G/
...
When I execute my tool on Galaxy, I get
Hi,
I'm working on a local installation of Galaxy using Torque with drmaa
(the pbs-torque scramble failed). The torque-drmaa library works fine so
far, except for one issue.
I'd like to specify some tool-dependent requirements in the
tool_runners section of universe_wsgi.ini. For now I've been
Hi
default_cluster_job_runner = drmaa://-q srpipeline -P pipeline/
works for me on LSF, so your syntax seems to be correct.
Assuming that -l mem=4gb:nodes=1:ppn=6 works the way you expect when you
start the jobs on your cluster from the shell, read on...
Bearing in mind that the value of the