Hello,
My problem is solved. Here is a solution:
my $output_galaxy = $ARGV[2];
.
if (! -e $output_perl_file) {print STDERR "no output file\n";}
else {`(cp -a $output_perl_file $output_galaxy) > ./error.log 2>&1`}
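For anyone doing the same thing from a Python wrapper, here is a minimal sketch of the same pattern (copy the tool's real output to the path Galaxy expects, and log failures to a file rather than stderr). The function name and log path are my own, not from the original script:

```python
import shutil

def copy_to_galaxy_output(tool_output, galaxy_output, log_path="error.log"):
    """Copy a tool's real output file to the path Galaxy expects.

    Failures are appended to a log file instead of being printed to
    stderr, since some Galaxy versions treat any stderr output as a
    tool error.
    """
    try:
        shutil.copy2(tool_output, galaxy_output)
        return True
    except OSError as exc:
        with open(log_path, "a") as log:
            log.write("copy failed: %s\n" % exc)
        return False
```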
And an
Hi All,
Maybe I missed it on the list here, but it seems that the samtools pileup
command has been deprecated, without backward compatibility, in the more
recent samtools packages...
http://massgenomics.org/2012/03/5-things-to-know-about-samtools-mpileup.html
Did anyone rewrite some of the
Update: adapting a default bam-pileup run on my small test dataset gives
the same result in my case when using mpileup instead of pileup in the
sam_pileup.py script (line 97).
Not tested any additional parameters though so there might be subtle
differences in the outcome.
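To illustrate the swap being discussed: a sketch of how a wrapper like sam_pileup.py might assemble the command line, with the pileup/mpileup choice isolated. The function name and the exact flag set are my own assumptions, not the wrapper's actual code, and mpileup's options differ from pileup's, so any extra parameters would need checking individually:

```python
def pileup_command(bam_path, ref_path, use_mpileup=True):
    """Build a samtools pileup/mpileup command line.

    Swapping the subcommand from 'pileup' to 'mpileup' is the minimal
    change discussed above; -f supplies the faidx-indexed reference in
    both cases.  Extra pileup-era flags are NOT guaranteed to carry
    over unchanged.
    """
    subcommand = "mpileup" if use_mpileup else "pileup"
    return ["samtools", subcommand, "-f", ref_path, bam_path]
```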
#prepare basic
Hmm, this appears to be an indexing error, but there are many potential causes
so that doesn't narrow down the issue much.
Because you're using galaxy-dist, the simplest thing to do is wait for our
upcoming distribution (which should happen in the next couple days), upgrade
and see if that
Hi all,
I'm new to Galaxy, so hello, and excuse me while I get to grips with all
the conventions!
I've successfully got my local Galaxy (galaxy-central) install set up
and working correctly, as well as a local toolshed. I've also linked the
two with a line in tool_sheds_conf.xml. But I'm
Excellent. Thanks, Jeremy!
-b
On Fri, Jul 20, 2012 at 8:37 AM, Jeremy Goecks jeremy.goe...@emory.edu wrote:
Brian,
For the
initial trinity de novo assembly, I took Jeremy's initial workflow and
tweaked it to work with the latest release - and submitted it to the
galaxy tool shed.
Ok. Thanks for your clarifications. So I will wait for your upcoming
distribution of Galaxy and redo the test.
Best,
Amine
2012/7/20 Jeremy Goecks jeremy.goe...@emory.edu
Hmm, this appears to be an indexing error, but there are many potential
causes so that doesn't narrow down the issue much.
Hi,
I wrote a pipeline (xml attached) that, from what I can gather,
succeeds, but galaxy shows it as an error and doesn't make the output
file accessible as a new data set.
From the server log, I can see that the command line is being
constructed correctly, and it even indicates that it's
Hi Jon,
It looks like your tool shed is not returning the information necessary for
installing your tool shed repository into your local Galaxy instance. Try
putting the following print statement just before line 978 in
~/lib/galaxy/web/controllers/admin_toolshed.py. This will tell you why
Hi Greg,
Thanks very much for the reply. I've actually just now worked around
this: it was somehow due to our Varnish/Apache configs.
I'll explain our setup and how I fixed this in case anyone else gets the
same error. For context our setup is a number of virtual machines:
* Varnish
Brian;
I wrote a pipeline (xml attached) that, from what I can gather,
succeeds, but galaxy shows it as an error and doesn't make the output
file accessible as a new data set.
Is it possible the software is writing to standard error? Galaxy doesn't
check status codes, but rather checks for
Thanks Brad and Nicole! This definitely explains it. The stderr
(which almost all my tools generate for monitoring purposes) was
resulting in galaxy thinking the process failed.
All is well and good now. HUGE THANKS!
-brian
On Fri, Jul 20, 2012 at 1:53 PM, Nicole Rockweiler
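For readers hitting the same thing: a minimal sketch of the failure mode. Older Galaxy releases treated any output on stderr as a tool failure regardless of exit code, so monitoring chatter printed to stderr makes a successful job look failed. One workaround (the wrapper name and log path here are hypothetical) is to redirect the tool's stderr to a log file before Galaxy sees it:

```python
import subprocess

def run_quietly(cmd, log_path="tool.log"):
    """Run a command with its stderr redirected to a log file.

    Since older Galaxy versions flag a job as failed if ANYTHING
    appears on stderr, diagnostics that the tool prints there for
    monitoring purposes are captured in a file instead.
    Returns (exit_code, stdout_bytes).
    """
    with open(log_path, "wb") as log:
        proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=log)
    return proc.returncode, proc.stdout
```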
Is there a way to configure a tool downloaded from the toolshed depot to use a
job runner other than the local runner? The tool_id for the toolshed tool
isn't honored in universe_wsgi.ini the way the default tools are.
Specifically, has anyone configured a toolshed tool to use drmaa instead
Hi David,
I'm not familiar enough with the job runners to know what the problem may be,
However, for tools installed from the tool shed, the tool id is the tool-shed
generated guid instead of the id attribute value of the tool config's
tool tag. So, for example, if you install the freebayes
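So, building on Greg's point, mapping a tool shed tool to a non-local runner in universe_wsgi.ini would use the full guid as the key rather than the short id. A sketch of what that might look like (the owner, revision, and drmaa native options below are illustrative placeholders, not a real installed repository):

```ini
[galaxy:tool_runners]
# Built-in tools map by their plain id:
upload1 = local:///
# A tool-shed tool maps by its tool-shed-generated guid
# (hypothetical owner/version shown):
toolshed.g2.bx.psu.edu/repos/devteam/freebayes/freebayes/0.0.2 = drmaa://-V -q all.q/
```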