OK, it's quite weird. Perhaps a Galaxy guru could give you a better answer,
but I remember running into this kind of issue a while ago. In the
meantime, you could take a look at these parameters:
$ grep retry galaxy.ini.sample
# these instances, you can choose to retry setting it internally or leave
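For reference, the retry-related options in galaxy.ini.sample look roughly like this (option names taken from the sample config; the defaults shown here may differ between releases):

```ini
# If metadata collection fails on the cluster, Galaxy can retry setting
# it internally in the Galaxy process instead of failing the job.
#retry_metadata_internally = True

# Number of times to retry collecting job outputs (useful on slow or
# laggy NFS mounts) before marking the job as failed.
#retry_job_output_collection = 0
```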
Thanks, Remy. I went through the cluster documentation and our Rocks
environment seems to be configured properly after all.
It appears that my issue may be related to the UCSC Main table browser.
The jobs that Galaxy reports as failed are leaving the
job_working_directory behind, with galaxy_#
I forgot to point out the need to share folders and to check the UID/GID
of the galaxy user between your systems (and its access to SGE).
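A quick way to check this (the `galaxy` account name and the SGE tooling are assumptions about your setup; adjust as needed):

```shell
#!/bin/sh
# Compare the galaxy user's UID/GID across nodes: run this on the Galaxy
# head node and on every SGE execution host, and check the outputs match.
id galaxy 2>/dev/null || echo "galaxy user not found on this host"

# The shared directories (Galaxy root, database/, job_working_directory)
# must be mounted at the same path on every node, e.g.:
#   ls -ld /path/to/galaxy/database/job_working_directory

# And the galaxy user must be allowed to submit to SGE, e.g.:
#   su - galaxy -c "qsub -help"
```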
Remy
2016-01-20 16:00 GMT+01:00 Rémy Dernat :
Hi Eric,
Here we use both solutions: Galaxy and RocksCluster. In Galaxy, you have to
define your jobs in "config/job_conf.xml", and you should probably source a
file (search for "environment" in your galaxy.ini) before the submit
process. In fact, you may have to set DRMAA_LIBRARY_PATH to load the DRMAA
shared library.
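A minimal sketch of such a config/job_conf.xml, assuming an SGE queue named all.q and a destination id of my choosing (both are assumptions, not from the thread):

```xml
<?xml version="1.0"?>
<!-- Minimal job_conf.xml sketch for running Galaxy jobs via DRMAA on SGE. -->
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner"
                load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        <plugin id="local" type="runner"
                load="galaxy.jobs.runners.local:LocalJobRunner"/>
    </plugins>
    <destinations default="sge_default">
        <destination id="sge_default" runner="drmaa">
            <!-- Passed straight to qsub; the queue name is an assumption. -->
            <param id="nativeSpecification">-q all.q</param>
        </destination>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>
```

Before starting Galaxy (or in the file referenced by the "environment" setting in galaxy.ini), you would export something like `DRMAA_LIBRARY_PATH=/opt/gridengine/lib/lx-amd64/libdrmaa.so`; the exact path depends on your SGE install.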
I am trying to get a Galaxy instance running on a Rocks cluster. I am able
to run jobs with the local runner at this point, but I am having an issue
with the drmaa runner that I haven't been able to fix. When I submit a job
in Galaxy, it is successfully submitted to the cluster and runs to
completion, but Galaxy reports the job as failed.