Sorry for my stupid question, but when I click on the link provided by the
Export to File option, I'm redirected to a web page which shows me this message:
Still exporting history MyHistory; please check back soon. Link:
http://cistrome.org/ap/history/export_archive?id=###
I have no possibility to
On Wed, Feb 6, 2013 at 11:43 PM, alex.khassa...@csiro.au wrote:
Hi All,
Can anybody please add a few words on how we can use the “initial
implementation” which “exists in the tasks framework”?
-Alex
To enable this, set use_tasked_jobs = True in your universe_wsgi.ini
file. The tools
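For reference, a minimal sketch of the setting being described, assuming a stock universe_wsgi.ini of that era (the worker-count option name is taken from the sample config and may differ in your version):

```ini
# universe_wsgi.ini -- enable the tasks framework's job splitting
[app:main]
use_tasked_jobs = True
# number of local workers that process the split tasks
local_task_queue_workers = 2
```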
Hi,
Some of our users are experiencing problems with (mainly) tabular data on
our local install of Galaxy (changeset 8368:0042b30216fc, Nov 06 2012).
I'm presuming it's some kind of meta-data problem.
The first strange behavior is that they are getting green history items,
but when the history
Unfortunately not, and with the migration of tools to the toolshed installation
mechanism I don't imagine this will be addressed (at least by the team) anytime
soon. If you wanted you could probably write a script that would reload a
specified tool in each of the separate web processes, or
On Feb 6, 2013, at 11:04 AM, Thyssen, Gregory - ARS wrote:
Hello
Everything seems to be working on my local Galaxy
I was talking to my IT guy who did the initial installation and he said that
virtualenv may not have been loaded.
When he did
% yum install python-virtualenv.noarch
He got a
This means the history is still being compressed; large histories can take a
long time to compress, so you'll have to be patient and check back
periodically to see when the history is ready for download.
Best,
J.
On Feb 7, 2013, at 4:02 AM, julie dubois wrote:
Sorry for my stupid question, but
On Feb 7, 2013, at 6:04 AM, graham etherington (TSL) wrote:
Hi,
Some of our users are experiencing problems with (mainly) tabular data on
our local install of Galaxy (changeset 8368:0042b30216fc, Nov 06 2012).
I'm presuming it's some kind of meta-data problem.
The first strange behavior is
On Feb 7, 2013, at 9:07 AM, graham etherington (TSL) wrote:
Hi Nate,
It's a sporadic problem (in that it happens quite often, but not all the
time) and yes, the Galaxy jobs are dispatched to a cluster.
I'm not sure about the shared file system. Galaxy is defined as a user on
our cluster,
Hi Greg,
I believe that you do not need to actually copy the data to your
workstation (although you could ...) - instead, symbolically link it
from the external drive to your workstation so that it appears local.
Then upload from that path. The data library upload by path option
will follow
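The suggestion above can be sketched as follows (the paths are illustrative, not from the thread):

```shell
# Stage data from an external drive without copying it: a symlink makes
# the file appear local, and Galaxy's upload-by-path option follows it.
mkdir -p /tmp/galaxy_staging
ln -sf /mnt/external_drive/reads.fastq /tmp/galaxy_staging/reads.fastq
# Then, in the Galaxy admin UI: Data Libraries -> Add datasets ->
# "Upload files from filesystem paths", entering the symlink's path.
```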
THANKS a lot
It works. It's a big step forward for us. Thanks again.
Julie
2013/2/7 Jeremy Goecks jeremy.goe...@emory.edu
This means the history is still being compressed; large histories can take
a long time to compress, so you'll have to be patient and check back
periodically to see
Update:
When I run as the Galaxy user, Python does have the right temp directory:
>>> tempfile.gettempdir()
'/scratch/galaxy'
So does that mean this upload job isn't running as galaxy, or is
skipping the job_environment_setup_file? Or could something else be
going on?
Any ideas, now I'm really
Could I modify /misc/local/galaxy/galaxy-dist/lib/galaxy/datatypes/sniff.py
to print out debug information like host, os.environ,
tempfile.gettempdir(), etc?
Would I be able to see its stdout from galaxy or the log, or is there
something special I need to do to retrieve the information?
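Something like the following (a hypothetical helper, not code from sniff.py itself) could be dropped into the module; writing to sys.stderr means the output lands in the job's stderr, which Galaxy captures:

```python
import os
import socket
import sys
import tempfile

def debug_environment(tag="sniff-debug"):
    """Print host, TMPDIR, and temp-dir details to stderr for troubleshooting."""
    sys.stderr.write("[%s] host=%s\n" % (tag, socket.gethostname()))
    sys.stderr.write("[%s] TMPDIR=%s\n" % (tag, os.environ.get("TMPDIR")))
    sys.stderr.write("[%s] tempfile.gettempdir()=%s\n" % (tag, tempfile.gettempdir()))

debug_environment()
```

One caveat: Galaxy has historically treated any output on a tool's stderr as a job failure, so appending to a log file under a known path may be safer for a job that should still finish green.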
On Thu,
I think I found the problem. The TMPDIR environment variable was set
to /tmp/5393732.1.f03.q for jobs galaxy was running. (I guess the
admins do this?)
I updated /usr/local/galaxy/job_environment_setup_file and also
/home/galaxy/.bashrc to set TMPDIR to /scratch/galaxy and it seems to
work now.
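The fix described amounts to lines like these (the /scratch/galaxy path is from the thread; whether the cluster scheduler re-overrides TMPDIR afterwards depends on the site's configuration):

```shell
# Appended to the galaxy user's ~/.bashrc and to the file named by
# environment_setup_file in universe_wsgi.ini, so every job inherits it:
TMPDIR=/scratch/galaxy
export TMPDIR
```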
That's very unfortunate... I have a ton of tools, and I guess now I have to create a package for them in a local toolshed to update them in a running Galaxy server? In any case, the toolshed installation also does not work for me... I still have to restart Galaxy, even after using the toolshed
Hi,
I tried the pcx and ps formats, but the browser just downloads these kinds
of files instead of rendering them in the Galaxy window... It seems png and
pdf files can be rendered in the Galaxy window. How can I make Galaxy
display other image formats like ps and pcx?
Thanks,
Luobin
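For what it's worth, the mimetype Galaxy attaches to a dataset comes from its datatype registration in datatypes_conf.xml. A sketch of the kind of entry involved (the png line mirrors Galaxy's stock file; the pcx entry is hypothetical, and whether a browser renders a given mimetype inline is up to the browser itself):

```xml
<!-- illustrative entries for datatypes_conf.xml -->
<datatype extension="png" type="galaxy.datatypes.images:Png" mimetype="image/png"/>
<datatype extension="pcx" type="galaxy.datatypes.images:Image" mimetype="image/pcx"/>
```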
On Thu, Feb 7, 2013 at 9:41 PM, Luobin Yang yangl...@isu.edu wrote:
Hi,
I tried the pcx and ps formats, but the browser just downloads these kinds
of files instead of rendering them in the Galaxy window... It seems png and pdf
files can be rendered in the Galaxy window. How can I make Galaxy
Luobin - one additional minor observation: in reality, Galaxy does not do
the displaying - it just sends stuff to the users' web browser for display.
So even when Galaxy knows what mimetype to attach to a specific image file,
the users' web browser's response to that mimetype will always remain the
Thanks Peter. I see, parallelism works on a single large file by splitting it
and using multiple instances to process the bits in parallel.
In our case we use a 'composite' data type, simply an array of input files, and we
would like to process them in parallel, instead of having a 'foreach' loop
Hi Dannon
I'm presuming that wiping hidden files as you suggest would eliminate them
entirely, so that there would be no record of them in the history, which
seems less than desirable if you want a record of how the analysis was
performed. It seems that it would be better if the steps in the
Ross, Peter,
Thanks for clarifying this!
Luobin
On Thu, Feb 7, 2013 at 4:36 PM, Ross ross.laza...@gmail.com wrote:
Luobin - one additional minor observation: In reality, Galaxy does not do
the displaying - it just sends stuff to the users' web browser for display.
So even when Galaxy
Hi,
Can someone point me to the documentation to set up/configure multiple
instances of Galaxy running on the same node, please?
I think this is the best method of hiding tools based upon users' email logins...
Thanks
Neil
Neil,
If by 'multiple' you mean 'independent' Galaxy instances, they must each
talk to independent backend databases, so if you're thinking of running e.g.
2 or more independent instances at CSIRO, each for specific tool sets and
sending each of your users to one or other of them based on some
Thanks Ross.
I did mean separate Galaxy instances like test and main with their own
independent backend databases.
How could I run say a test and a main from the same node?
I guess I'd need to modify the port number for each instance and then add multiple
entries in the apache config file, i.e. So
ok - sorry I misunderstood.
Yes, assuming you already have one Galaxy instance configured right,
cloning and editing the rewrite and authentication sections for the other
paste process should work and that looks reasonable to me FWIW - OTOH,
apache configuration is definitely one of the darker
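A minimal sketch of the port side of this setup (directory names and ports are illustrative): give each instance its own port in its universe_wsgi.ini, then add one rewrite/proxy stanza per instance in the Apache config.

```ini
# main instance: galaxy-main/universe_wsgi.ini
[server:main]
host = 127.0.0.1
port = 8080

# test instance: galaxy-test/universe_wsgi.ini
[server:main]
host = 127.0.0.1
port = 8081
```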