hi martin, thanks for the immediate follow-up!
> Our aim at using the API is the simplicity it provides.
part of my rationale in looking at the API for reporting purposes was
stability: my impression is that the galaxy-internal database schema
undergoes relatively frequent revisions, but i would expect the public
API to remain comparatively stable across releases.
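(for concreteness, the kind of reporting query i have in mind, as a
minimal sketch using the bioblend client library; the url and api key
are placeholders, of course:)

    # tally recent jobs by state via the galaxy API; a sketch only,
    # url and key are placeholders.
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url='https://lap.example.org', key='<api key>')

    counts = {}
    for job in gi.jobs.get_jobs():
        counts[job['state']] = counts.get(job['state'], 0) + 1
    print(counts)
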
colleagues,
in our adaptation of galaxy for large-scale natural language
processing, a fairly common use pattern is to invoke a workflow on a
potentially large number of text files. hence, i am wondering about
facilities for uploading an archive (in ‘.zip’ or ‘.tgz’ format, say)
containing several input files in one go.
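(to make the question concrete: below is roughly what i do today, file
by file, with bioblend; the history name, workflow id, and input label
are made up. i am hoping the per-file loop can be replaced by a single
archive upload:)

    # per-file upload and workflow invocation; identifiers are made up.
    import glob
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url='https://lap.example.org', key='<api key>')
    history = gi.histories.create_history(name='batch run')

    for path in glob.glob('texts/*.txt'):
        upload = gi.tools.upload_file(path, history['id'])
        inputs = {'input': {'src': 'hda',
                            'id': upload['outputs'][0]['id']}}
        gi.workflows.invoke_workflow('<workflow id>', inputs=inputs,
                                     inputs_by='name',
                                     history_id=history['id'])
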
dear keith,
in our work on building the Language Analysis Portal (LAP) at the
University of Oslo, we have had to confront the same two issues you
mention. our general approach has been to separate LAP-specific code
(and specifications) from the Galaxy tree as much as possible.
for example, we maintain our tool wrappers and their configuration in
a separate repository, outside of the galaxy tree, and only reference
them from the galaxy configuration.
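(a simplified sketch of what i mean; the paths and tool names are made
up: an additional tool_conf.xml, registered through the
tool_config_file option in galaxy.ini, points at wrappers that live in
our own repository:)

    <!-- /opt/lap/config/tool_conf.xml: tool wrappers outside the
         galaxy tree; paths and names are made up. -->
    <toolbox>
      <section id="lap" name="LAP Tools">
        <tool file="/opt/lap/tools/tokenizer/tokenizer.xml"/>
        <tool file="/opt/lap/tools/tagger/tagger.xml"/>
      </section>
    </toolbox>
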
‘--mem-per-cpu’).
once i understand things better, i would of course be happy to
contribute a summary for the galaxy wiki. as far as i can see, the
current documentation does not cover job configuration and job
resources in full detail.
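for reference, the kind of destination i have been experimenting with,
as a sketch only; the resource values are made up:

    <!-- job_conf.xml fragment: a DRMAA destination passing SLURM
         resource flags through nativeSpecification; values made up. -->
    <destinations default="slurm_default">
      <destination id="slurm_default" runner="drmaa">
        <param id="nativeSpecification">--mem-per-cpu=4096 --time=02:00:00</param>
      </destination>
    </destinations>
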
with thanks in advance, oe
many thanks for taking the time to answer my query, gildas!
> In your job_conf.xml, you can set per tool a destination.
i had realized that much (sending some of our tools to SLURM, running
others on the local node), but i had failed to realize that one can of
course have /multiple/ SLURM destinations, with different resource
requests for each.
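(for the archives, here is my understanding of the suggestion, as a
sketch; destination ids, flags, and tool ids are made up:)

    <!-- job_conf.xml: multiple SLURM destinations with different
         resource requests, plus per-tool routing; names made up. -->
    <destinations default="slurm_small">
      <destination id="slurm_small" runner="drmaa">
        <param id="nativeSpecification">--mem-per-cpu=1024</param>
      </destination>
      <destination id="slurm_big" runner="drmaa">
        <param id="nativeSpecification">--mem-per-cpu=16384 --cpus-per-task=8</param>
      </destination>
      <destination id="local" runner="local"/>
    </destinations>
    <tools>
      <tool id="lap_parser" destination="slurm_big"/>
      <tool id="lap_tokenizer" destination="local"/>
    </tools>
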
dear colleagues,
at the university of oslo, we develop a galaxy-based portal for
natural language processing (LAP: Language Analysis Portal). jobs are
submitted to a compute cluster via DRMAA and SLURM. current
development is against the galaxy release of march 2015.
i am wondering about fine-grained, per-tool control over job resources
(memory, run time, and the like).
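(for context, our job_conf.xml currently amounts to the following
simplified sketch: one DRMAA runner, submitting everything to SLURM
through a single default destination:)

    <!-- simplified job_conf.xml: a single DRMAA runner and one
         default destination for SLURM. -->
    <job_conf>
      <plugins>
        <plugin id="drmaa" type="runner"
                load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
      </plugins>
      <destinations default="slurm">
        <destination id="slurm" runner="drmaa"/>
      </destinations>
    </job_conf>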