Richard,
You're correct that the workflow API currently provides no way to modify
tool parameters at runtime, other than inputs. Depending on your needs,
it might be feasible to have a few static workflows that you reuse often via
the workflow API. If that isn't the case, and you
Hello,
I was wondering what would be the best way to extend Galaxy's API
functionality to allow for runtime modification of tool parameters?
I have successfully been able to run workflows programmatically using
the API, following the basic steps in:
scripts/api/execute_workflow.py.
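For reference, the request that scripts/api/execute_workflow.py sends can be sketched as below. This is a minimal sketch, not the script itself; the Galaxy URL, API key, workflow id, and dataset ids are placeholder values you would substitute with your own.

```python
import json

# Hypothetical values for illustration only -- replace with your own
# Galaxy URL, API key, workflow id, and history dataset ids.
GALAXY_URL = "http://localhost:8080"
API_KEY = "your-api-key"

def build_workflow_payload(workflow_id, input_datasets):
    """Build the JSON payload POSTed to /api/workflows.

    ds_map ties each workflow input step to an existing dataset:
    'hda' = history dataset, 'ldda' = library dataset.
    """
    ds_map = {
        step_id: {"src": "hda", "id": dataset_id}
        for step_id, dataset_id in input_datasets.items()
    }
    return {
        "workflow_id": workflow_id,
        "history": "Workflow run via API",  # name of a new history to run in
        "ds_map": ds_map,
    }

payload = build_workflow_payload("f2db41e1fa331b3e", {"0": "33b43b4e7093c91f"})
print(json.dumps(payload, indent=2))
```

Note that only the `ds_map` inputs vary per run; there is no field in this payload for overriding other tool parameters, which is exactly the limitation above.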
Thanx guys, that should give me heaps to go on with :-)
--Russell
> -----Original Message-----
> From: Nate Coraor [mailto:n...@bx.psu.edu]
> Sent: Tuesday, 6 December 2011 8:21 a.m.
> To: Ross
> Cc: Smithies, Russell; galaxy-dev@lists.bx.psu.edu
> Subject: Re: [galaxy-dev] possibly weird config
Hi David,
There should be additional information in the galaxy database about why the job
failed; take a look at stderr column of the failed job using some SQL like this:
--
select * from job where state='error' and tool_id='tophat' and stderr like
'%indexing reference%' order by id desc;
--
I am unable to get VCF files to display in IGV in my local galaxy
installation. BAM files display fine, however, whenever I click on the
link to display a VCF file I get the following error message:
"You must import this dataset into your current history before you
can view it at the desire
Thank you so much Dannon.
I appreciate your help but I need some more clarification.
In the tool I am integrating (which consists of Java classes), if the input
parameters are
./main.bash eps0.3_40reads.fa population10_ref.fa 15 6 120
then the output is in eps0.3_40reads_I_6_15_CNTGS_DIST0_EM20.txt
As
Hi Nate,
Here's the output of the migration:
==
galaxy@galaxy2:~/prog/galaxy-2011-12-5$ sh manage_db.sh upgrade
79 -> 80...
Migration script to create tables for disk quotas.
done
80 -> 81...
Migration script to add a 'tool_version' column to the hda/ldda tables.
done
81 -> 82...
Migration s
On Dec 5, 2011, at 3:02 PM, Leon Mei wrote:
> Dear all,
>
> Today I upgraded our NBIC server from an older release (fetched on July 5th,
> 2011) to the latest version in galaxy-dist. After executing "hg update" and
> "sh manage_db upgrade" and merging some local configurations, I successfully
> re
Dear all,
Today I upgraded our NBIC server from an older release (fetched on July
5th, 2011) to the latest version in galaxy-dist. After executing "hg update"
and "sh manage_db upgrade" and merging some local configurations, I
successfully restarted the server. However once I access the server from a
Hi Carlos,
Are you using a version of GATK that is 1.3?
Another possibility is that you are missing an R library (e.g. 'ggplot2') that
is used by the VariantRecalibrator for building the plots. Start up an
interactive R session and type: "library('ggplot2')"
If you are missing the library, t
On Dec 4, 2011, at 5:39 PM, Ross wrote:
> Hi Russell,
>
> Just addressing AD/LDAP authentication - authentication is trivially and best
> (IMHO) left to an external (eg apache) proxy - save yourself a lot of effort
> - it's known to work well.
> Lock down the paste process so it only talks to y
Hi Giota,
The executable should be placed in a directory on $PATH. You can see what
directories those are with:
% echo $PATH
You can add to $PATH with:
% export PATH="/new/dir:$PATH"
--nate
On Dec 4, 2011, at 3:55 PM, Giota Kottara wrote:
> Hi,
>
> I've put the executable in the
Toqa,
Just to make sure I've understood your question: the problem is that a tool
that you're trying to wrap doesn't provide a way to specify a particular output
filename? Take a look at the from_work_dir attribute of a <data> output element.
Or are you asking how to define outputs in general? For that, s
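As a sketch of what that looks like, here is a minimal (hypothetical) tool config fragment; the tool id, command, and file name are placeholders, but the from_work_dir attribute is the relevant piece -- it tells Galaxy to pick up a file the tool wrote under a fixed name in its working directory and register it as the output:

```xml
<tool id="my_tool" name="My Tool">
  <command>./main.bash $input1 $input2 15 6 120</command>
  <outputs>
    <!-- collect fixed_output_name.txt from the job's working directory -->
    <data name="output1" format="txt" from_work_dir="fixed_output_name.txt" />
  </outputs>
</tool>
```

If the tool derives the output name from its inputs (as in the example above), a small wrapper script that renames the file to a fixed name before the job ends is the usual workaround.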
Hi Everybody,
I am running a tool which is not defining the output file name as parameter.
the tool is running and I can see the output file in the history directory but
not in the history frame.
how could I solve the problem? I should see and download the output file from
the history but I see
Hi Graham,
I've created something similar myself. I've not put it on the toolshed
yet, as I have to test it further, but it seems to work as expected. See
code in attachment.
Best regards,
geert vandeweyer
University of Antwerp
On 12/05/2011 12:47 PM, graham etherington (TSL) wrote:
Hi,
I
Graham,
How are the output files being handled in split_var_length_barcodes_wrapper.py?
See http://wiki.g2.bx.psu.edu/Admin/Tools/Multiple%20Output%20Files for a
reference-- you're going to want to follow the bottom example there involving
$__new_file_path__ and the naming convention specified
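To make the convention from that wiki page concrete, a wrapper can write each split file into the directory Galaxy passes as $__new_file_path__, named primary_<parent dataset id>_<designation>_<visible|hidden>_<extension>. This is a minimal sketch, assuming a wrapper that receives the parent dataset id and the new-file path on its command line; the function and variable names are illustrative, not Galaxy APIs:

```python
import os

def new_output_name(parent_id, designation, ext, visible=True):
    """Name an extra output file per Galaxy's convention for tools that
    produce a variable number of outputs:
    primary_<parent dataset id>_<designation>_<visible|hidden>_<extension>"""
    visibility = "visible" if visible else "hidden"
    return "primary_%s_%s_%s_%s" % (parent_id, designation, visibility, ext)

def write_barcode_split(new_file_path, parent_id, barcode, records, ext="fastq"):
    """Write one split file (per barcode) into $__new_file_path__ so Galaxy
    collects it into the history when the job finishes."""
    path = os.path.join(new_file_path,
                        new_output_name(parent_id, barcode, ext))
    with open(path, "w") as out:
        out.writelines(records)
    return path
```

Here the barcode itself serves as the designation, so each split dataset shows up in the history labelled by its barcode.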
Hi,
I'm developing a barcode splitter, which will split barcodes of variable
length and then put them into the history for downstream analysis
(currently FASTX barcode splitter only splits barcodes of the same length
and the user has to download the data through an html link). The number of
output