Re: [galaxy-dev] ClustalW jobs aren't working

2014-06-10 Thread Hans-Rudolf Hotz



On 06/09/2014 07:33 PM, Malcolm Tobias wrote:

All,

I apologize for an off-topic question, but my update failed:

[galaxy@login002 galaxy-dist]$ /export/mercurial-1.7.5/bin/hg pull -u |
tee hg.pull.out

...

added 3716 changesets with 9929 changes to 2124 files (+46 heads)

...

merging lib/galaxy/app.py

merging lib/galaxy/app.py failed!

warning: conflicts during merge.

merging lib/galaxy/model/__init__.py

merging lib/galaxy/model/__init__.py failed!

warning: conflicts during merge.

merging lib/galaxy/model/mapping.py

merging lib/galaxy/model/mapping.py failed!

merging tool-data/shared/ucsc/ucsc_build_sites.txt and
tool-data/shared/ucsc/ucsc_build_sites.txt.sample to
tool-data/shared/ucsc/ucsc_build_sites.txt.sample

1528 files updated, 1 files merged, 1289 files removed, 3 files unresolved

use 'hg resolve' to retry unresolved file merges

resolve apparently can't fix this issue:

[galaxy@login002 galaxy-dist]$ /export/mercurial-1.7.5/bin/hg resolve --all

merging lib/galaxy/app.py

warning: conflicts during merge.

merging lib/galaxy/app.py failed!

merging lib/galaxy/model/__init__.py

warning: conflicts during merge.

merging lib/galaxy/model/__init__.py failed!

merging lib/galaxy/model/mapping.py

warning: conflicts during merge.

merging lib/galaxy/model/mapping.py failed!

What's the recommended way to fix this?



Depending on your local customization, it is not unusual to get 
unresolved files.

Open the files in an editor and complete the merge manually.


Regards, Hans-Rudolf






Thanks!

Malcolm

On Monday 09 June 2014 10:47:04 Malcolm Tobias wrote:

 

  Ross,

 

  Thanks for the response. My instance of Galaxy is indeed ancient:

 

  [galaxy@login002 galaxy-dist]$ /export/mercurial-1.7.5/bin/hg tip

  changeset: 10003:b4a373d86c51

  tag: tip

  parent: 10001:471484ff8be6

  user: greg

  date: Wed Jun 12 11:48:09 2013 -0400

  summary: Add targets to Repository Actions menu items.

 

  Previous attempts at updating have been pretty painful. I'll give it
a shot and let you know whether or not this fixes the problem.

 

  Cheers,

  Malcolm

 

  On Friday 06 June 2014 18:30:01 Ross wrote:

   Hi Malcolm,

   That error makes me think you might be running an outdated version
of galaxy code - the toolshed code has undergone extensive revision over
the last few months?

   I just tested that repository on a freshly updated galaxy-central
clone and it installed without drama, so I wonder what:

  

   hg tip

  

   shows?

  

   I just tested using :

   (vgalaxy)rlazarus@rlazarus-UX31A:~/galaxy$ hg tip

   changeset:   13756:84a00e4f7d06

   tag: tip

   user:        Dannon Baker dannonba...@me.com

   date:        Fri Jun 06 17:12:30 2014 -0400

   summary: Clarify language in DeleteIntermediateDataset PJA.

  

   If your clone is not up to date, I'd recommend completely removing
the failed installation (through the admin menu - check the box for
complete removal), shut down galaxy, backup your database, do the usual
hg pull -u dance and any necessary database upgrade steps then try a
clean install?

  

   Thanks for reporting this - if it persists on recent Galaxy code
we'll need to do some deeper investigation.

  

  

   On Fri, Jun 6, 2014 at 11:32 PM, Malcolm Tobias mtob...@wustl.edu
wrote:

  

   Ross,

  

   Thanks for the reply! Unfortunately I am the local Galaxy admin ;-)

  

   I had tried installing the clustalw tool from the toolshed, but
that failed with an error (more on that later). I disabled the local tool:

  

   [galaxy@login002 galaxy-dist]$ diff tool_conf.xml tool_conf.xml.bkup

   226a227,229

> <section name="Multiple Alignments" id="clustal">
>     <tool file="rgenetics/rgClustalw.xml" />
> </section>

  

   I bounced galaxy in case that's necessary, then retried installing
from the toolshed. Shortly after clicking install, I get this message:

  

   Internal Server Error

   Galaxy was unable to sucessfully complete your request

  

   An error occurred.

   This may be an intermittent problem due to load or other
unpredictable factors, reloading the page may address the problem.

  

   The error has been logged to our team.

  

   The logs appear to be complaining about
'prior_installation_required' which I'm assuming means the
package_clustalw_2_1 dependency. I was able to install that, and I can
verify by looking at the local toolshed:

  

   [galaxy@login002 ~]$ ls
galaxy-toolshed/toolshed.g2.bx.psu.edu/repos/devteam/

   bowtie_wrappers package_clustalw_2_1 package_vcftools_0_1_11

   bwa_wrappers package_fastx_toolkit_0_0_13

  

   Again, I'll post the logs from when the install fails in case that
helps. Any suggestions are much appreciated.

  

   Cheers,

   Malcolm

  

  

   10.28.56.101 - - [06/Jun/2014:08:20:38 -0500] GET
/admin_toolshed/prepare_for_install?tool_shed_url=http://toolshed.g2.bx.psu.edu/&repository_ids=0e5d027cf47ecae0&changeset_revisions=7cc64024fe92
HTTP/1.1 500 -

[galaxy-dev] installing htseq-count from tool-shed

2014-06-10 Thread Karen Chait
Hi,

I am trying to install the tool htseq-count from the tool-shed. I had a
problem with the first installation, so I uninstalled the tool and all its
dependencies (numpy, samtools 0.1.19 and pysam) from the 'manage installed
tool shed repositories' page and installed it again, but then I get installation
status 'Installed, missing tool dependencies' for htseq-count.

When running this tool I get the error 

Fatal error: The HTSeq python package is not properly installed, contact
Galaxy administrators

The path set in universe_wsgi.ini:

tool_dependency_dir = /home/galaxy/galaxy-dist/tool_deps

 

in the installation log
htseq/0.6.1/lparsons/htseq_count/d5edaf8dc974/INSTALLATION.log I get the
error:

Setup script for HTSeq: Failed to import 'numpy'.

Please install numpy and then try again to install HTSeq.

 

I reinstalled numpy but that did not help.

How should I continue?

Thanks,

Karen

 

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] installing htseq-count from tool-shed

2014-06-10 Thread Lance Parsons
Hi Karen,

Sorry you're having trouble installing the tool. It seems that somehow 
the numpy dependency didn't install properly or isn't being found during the 
HTSeq install. Is there any other info in the log to indicate an issue with 
installing Numpy? Also, what OS are you running Galaxy on?
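For a quick diagnostic, something like the following can be run with the same
Python interpreter Galaxy's tool environment uses (for example after sourcing
the numpy env.sh under your tool_dependency_dir). This is only an illustrative
sketch, not part of the HTSeq installer:

from __future__ import print_function

try:
    import numpy
    print("numpy %s found at %s" % (numpy.__version__, numpy.__file__))
except ImportError as exc:
    print("numpy is not importable: %s" % exc)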

Lance

On June 10, 2014 4:04:13 AM EDT, Karen Chait kch...@techunix.technion.ac.il 
wrote:
Hi,

I am trying to install the tool htseq-count from the tool-shed. I had a
problem in the first installation so I uninstalled the tool and all its
dependencies (numpy, samtools 0.1.19 and pysam) from the 'manage
installed
tool shed repositories) and installed it again but then I get
installation
status 'Installed, missing tool dependencies' for htseq-count.

When running this tool I get the error 

Fatal error: The HTSeq python package is not properly installed,
contact
Galaxy administrators

The path set in universe_wsgi.ini:

tool_dependency_dir = /home/galaxy/galaxy-dist/tool_deps

 

in the installation log
htseq/0.6.1/lparsons/htseq_count/d5edaf8dc974/INSTALLATION.log I get
the
error:

Setup script for HTSeq: Failed to import 'numpy'.

Please install numpy and then try again to install HTSeq.

 

I reinstalled numpy but that did not help.

How should I continue?

Thanks,

Karen

 






-- 
Lance Parsons - Scientific Programmer
134 Carl C. Icahn Laboratory
Lewis-Sigler Institute for Integrative Genomics
Princeton University

[galaxy-dev] SAM/BAM To Counts dataset generation errors

2014-06-10 Thread Israel Bravo
When trying SAM/BAM to Counts I get the following error message:   pysam
not installed; please install it

But a) pysam was not in this tool's dependencies;
   b) pysam/0.7.7 is already installed


-- 

Israel Bravo
about.me/bravobih

[galaxy-dev] Missing navigation pane

2014-06-10 Thread Stephen E
I installed a local instance of galaxy approximately a month ago (build 
2014.04.14).
I was having an issue with the Data Manager to fetch reference genomes and 
after checking to see if there was a newer version of galaxy, I decided to 
update and see if that fixed my problem.
I ran hg pull and then hg update latest_2014.06.02. I tried to rerun galaxy 
but was instructed to run manage_db.sh so I did (sh manage_db.sh upgrade).
When I ran galaxy after all this, it started but when opened in a web browser, 
the navigation pane at the top is missing (i.e. Analyze Data, Workflows, User, 
etc). There is a blue bar but nothing is on it.
How do I get the missing navigation pane back? I can't do a lot of things 
without it (i.e. check help or change user settings). I can still get to the 
administrator section but only by appending /admin to the url. I need to know 
how to fix the new version or how to successfully revert to a previous version 
without anything else breaking.

Re: [galaxy-dev] Missing navigation pane

2014-06-10 Thread Dannon Baker
Hey Stephen,

Can you look in the server logs to see if there are any errors being
reported?  Or if there are any javascript errors in the browser window?

You may also want to try clearing your browser cache.

-Dannon


On Tue, Jun 10, 2014 at 1:54 AM, Stephen E sedwards...@hotmail.com wrote:

 I installed a local instance of galaxy approximately a month ago (build
 2014.04.14).

 I was having an issue with the Data Manager to fetch reference genomes and
 after checking to see if there was a newer version of galaxy, I decided to
 update and see if that fixed my problem.

 I ran hg pull and then hg update latest_2014.06.02. I tried to rerun
 galaxy but was instructed to run manage_db.sh so I did (sh manage_db.sh
 upgrade).

 When I ran galaxy after all this, it started but when opened in a web
 browser, the navagation pane at the top is missing (i.e. Analyze Data,
 Workflows, User, etc). There is a blue bar but nothing is on it.

 How do I get the missing navigation pane back? I can't do a lot of things
 without it (i.e. check help or change user settings). I can still get to
 the administrator section but only by appending /admin to the url. I need
 to know how to fix the new version or how to succesfully revert to a
 previous version without anything else breaking.




Re: [galaxy-dev] inconsistent use of tempfile.mkstemp during upload causes problems

2014-06-10 Thread John Chilton
Hello All,

  Thanks for the well laid out e-mails and great discussion. I think
John-Paul's comment about the code growing up organically is probably
exactly right. (A link below has some details from Nate about this).

  So late last night I opened a sprawling pull request that cleaned up
a lot of stuff in upload.py and then realized it was a little bit
incorrect and wouldn't actually help any of you until you were able to
upgrade to the August 2014 release :) so I declined. I have reworked a
relatively small patch to fix the immediate problem of tmpfile
consistency as it relates to shutil.move. John-Paul Robinson,
can you apply it directly and see if it fixes your problems?

https://bitbucket.org/jmchilton/galaxy-central-fork-1/commits/dc706d78d9b21a1175199fd9201fe9781d48ffb5/raw

  If it does, the devteam will get this merged and I will continue
with the upload.py improvements that were inspired by this discussion
(see https://bitbucket.org/galaxy/galaxy-central/pull-request/408 for
more details).

-John

On Mon, Jun 9, 2014 at 2:50 PM, John-Paul Robinson j...@uab.edu wrote:
 We've considered the sudo solution, but it opens the window to other
 bugs giving galaxy the power to change ownership of other files in our
 shared user cluster environment.  We could isolate the power to a script
 but then we still need to monitor this code closely.  We'd prefer not to
 introduce that requirement.

 I didn't have the time to trace this down either. ;)  I just got tired
 of this issue and the inconsistent failures causing confusion in our
 community.

 I hope your insight into the logic drift over time is accurate and can
 be corrected.  The upload code looks like it's gone through a whole lot
 of organic growth. :/

 Looking forward to additional comments from the dev team.

 ~jpr


 On 06/09/2014 03:30 PM, Kandalaft, Iyad wrote:
 Hi JPR,

 I had the same questions while trying to figure out a fool-proof way to 
 allow users to import files into galaxy on our Cluster.  I couldn't exactly 
 figure out, nor did I have the time to really review, why the galaxy code 
 did these steps and why that shutil.move failed.  I opted to simply insert 
 code in upload.py to sudo chown/chmod the files as an easier hack to 
 this problem.  There are pros and cons to using the tmp var from the env, 
 and it will depend on your intentions/infrastructure.  I think the ideology 
 was that the Galaxy folder was supposed to be shared across all nodes in a 
 cluster, and they opted to use the TMP path within the galaxy folder.  
 Overtime, the code probably partially diverged from that notion, which 
 caused this dilemma.

 I believe that the best fix is to make the underlying code simply copy the 
 files into the environment-provided temp, which is configurable in galaxy's 
 universe_wsgi.ini, and assume ownership from the get-go.  This code of 
 copying and/or moving in discrete steps creates unnecessary complexity.

 Regards,
 Iyad

 -Original Message-
 From: galaxy-dev-boun...@lists.bx.psu.edu 
 [mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of John-Paul Robinson
 Sent: Monday, June 09, 2014 3:08 PM
 To: galaxy-dev@lists.bx.psu.edu
 Subject: [galaxy-dev] inconsistent use of tempfile.mkstemp during upload 
 causes problems

 There appears to be some inconsistent use of tempfile.mkstemp() within 
 upload.py that causes problems when users import data files to galaxy from a 
 cluster directory via the upload process and import/temp/dataset directories 
 are on different file systems.

 The issue manifests when Galaxy's job directory, dataset directory and 
 import directory are on different file systems (common for cluster
 environments) in conjunction with a configuration where users can copy their 
 data files directly to the import directory from which Galaxy selects data 
 sets to upload (as opposed to using an FTP gateway).

 While allowing users to copy files to an import directory rather than using 
 the FTP gateway may not be that common, we use this configuration locally to 
 help build a more seamless interface with our local collection of HPC 
 resources.  Users can be logged into their cluster account and move data 
 into galaxy with a file copy command rather than having to use FTP.

 This configuration has worked well in our environment as long as the correct 
 ownership configuration existed on the import directory and as long as the 
 import directory, job temporary directory, and galaxy data set directory 
 were all on the same file system.

 We now have our galaxy dataset directory on a different file system and are 
 seeing inconsistent behavior during the upload.py runs depending on if the 
 data is ordinary text, BAM files, or gzipped data.

 A subset of uploads will fail because of the way temporary files are created 
 by Galaxy to facilitate the import and any associated conversion processes 
 of different file types.

 During the import,

 1) Galaxy will copy the original file 

[galaxy-dev] Tracking tool execution time?

2014-06-10 Thread Jim McCusker
Is there a way to look at how long a galaxy tool takes to run?

Jim

[galaxy-dev] Toolshed update list?

2014-06-10 Thread Eric Rasche
Is there any way to access a list of repository updates in a toolshed,
perhaps as an RSS feed? Alternatively, is this data available internally
and it just requires writing a route to handle that query?

As of now when I log into my toolshed I'm faced with a page of
meaningless hashes and version numbers which I cannot mentally track for
updates across multiple days/projects.

Cheers,
Eric

-- 
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
404-692-2048
e...@tamu.edu

Re: [galaxy-dev] Galaxy updated botched?

2014-06-10 Thread Kandalaft, Iyad
Hi Everyone,

This is follow-up information/questions to the issue I ran into with the galaxy 
June 2nd, 2014 update.  I hope to receive feedback on how to proceed.

Background:

-  Running Galaxy (DB Schema 118) with a MySQL 5.5 back-end

-  When updating galaxy to the june 2nd release, the v120 DB schema has 
referential integrity constraints, which produced errors during the upgrade.

-  Completed two galaxy updates in the past 4 months without 
encountering this before (schema changes included)

Discussion:
In the past, referential integrity in the DB schema was never an issue.  I 
checked backups and the current database to find that the database tables are 
using the MyISAM engine.  MyISAM =  no referential integrity support, no 
transactions.
I reviewed galaxy's SQLAlchemy templates and determined that 
mysql_engine='InnoDB' isn't set on tables.  This explains why all tables were 
created with the MyISAM engine.  If the mysql_engine is not innodb, SQL Alchemy 
is supposed to drop any referential integrity constraints defined in the 
schema.  What I don't understand is why SQL Alchemy is no longer ignoring the 
referential integrity constraints.

Going forward, can anyone propose how I can salvage the database or continue 
ignoring referential integrity for now?
Assuming that my limited understanding of SQLAlchemy holds water, I was looking 
at fixing the galaxy code base but I need some clarification on the DB schema 
versioning.  Do I edit schema v1 and add the appropriate table args to make 
every table an innodb engine table or do I add a new schema and modify all 
tables to use the innodb engine?  Alternatively, I can use DDL events
def after_create(target, connection, **kw):
    connection.execute("ALTER TABLE %s ENGINE=InnoDB" % target.name)
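For what it's worth, a minimal sketch of the two options being weighed, against
the SQLAlchemy versions current at the time; the table and listener names here
are placeholders, not Galaxy's actual mapping code:

from sqlalchemy import Column, Integer, MetaData, Table, event

metadata = MetaData()

# Option 1: declare the table with the InnoDB engine from the start
# (the mysql_engine keyword is ignored by non-MySQL backends).
example = Table(
    "example", metadata,
    Column("id", Integer, primary_key=True),
    mysql_engine="InnoDB",
)

# Option 2: convert the table after creation with a DDL event listener,
# as in the after_create() function above.
def set_innodb(target, connection, **kw):
    connection.execute("ALTER TABLE %s ENGINE=InnoDB" % target.name)

event.listen(example, "after_create", set_innodb)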

Thank you for your help.

Regards,
Iyad Kandalaft

Bioinformatics Application Developer
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
KW Neatby Bldg | éd. KW Neatby
960 Carling Ave| 960, avenue Carling
Ottawa, ON | Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel: 
iyad.kandal...@agr.gc.ca
Telephone | Téléphone 613- 759-1228
Facsimile | Télécopieur 613-759-1701
Government of Canada | Gouvernement du Canada


[galaxy-dev] troubleshooting Galaxy with LSF

2014-06-10 Thread I Kozin
Hello,
This is largely a repost from the biostar forum following the suggestion
there to post here.

I'm doing my first steps in setting up a Galaxy server with an LSF job
scheduler. Recently LSF started supporting DRMAA again so I decided to give
it a go.

I have two setups. The one that works is a standalone server (OpenSuse
12.1, python 2.7.2, LSF 9.1.2). By works I mean that when I log into
Galaxy using a browser and upload a file, a job gets submitted and run and
everything seems fine.

The second setup does not work (RH 6.4, python 2.6.6, LSF 9.1.2). It's a
server running Galaxy which is meant to submit jobs to an LSF cluster. When
I similarly pick and download a file I get

Job 72266 is submitted to queue short.
./run.sh: line 79: 99087 Segmentation fault  python ./scripts/paster.py
serve universe_wsgi.ini $@

For the moment, I'm not bothered with the full server setup, I'm just
testing whether Galaxy works with LSF and therefore run ./run.sh as a user.

The job configuration job_conf.xml is identical in both cases:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="lsf" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
            <param id="drmaa_library_path">/opt/gridware/lsf/9.1/linux2.6-glibc2.3-x86_64/lib/libdrmaa.so</param>
        </plugin>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="lsf_default">
        <destination id="lsf_default" runner="lsf">
            <param id="nativeSpecification">-W 24:00</param>
        </destination>
    </destinations>
</job_conf>

run.sh is only changed to allow remote access.

Most recently I tried replacing python with 2.7.5 to no avail. Still the
same kind of error. I also updated Galaxy.

Any hints would be much appreciated. Thank you

Re: [galaxy-dev] troubleshooting Galaxy with LSF

2014-06-10 Thread Kandalaft, Iyad
This is just a guess, which may help you troubleshoot.
It could be that python is reaching a stack limit: run ulimit -s and set it 
to a higher value if required.
I'm completely guessing here, but is it possible that the DRMAA library is missing a 
linked library on the Red Hat system – check with ldd?
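If it helps, a rough Python equivalent of the ulimit -s check; this only
inspects and adjusts limits for the current process and anything it launches,
and is a diagnostic sketch rather than anything Galaxy itself does:

import resource

# Inspect the current stack limits, roughly what `ulimit -s` reports.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("stack soft limit: %s" % ("unlimited" if soft == resource.RLIM_INFINITY else soft))
print("stack hard limit: %s" % ("unlimited" if hard == resource.RLIM_INFINITY else hard))

# Raise the soft limit up to the hard limit before launching paster.py.
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))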

Regards,
Iyad Kandalaft

Iyad Kandalaft
Microbial Biodiversity Bioinformatics
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
960 Carling Ave.| 960 Ave. Carling
Ottawa, ON| Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel  iyad.kandal...@agr.gc.ca
Telephone | Téléphone 613-759-1228
Facsimile | Télécopieur 613-759-1701
Teletypewriter | Téléimprimeur 613-773-2600
Government of Canada | Gouvernement du Canada




From: galaxy-dev-boun...@lists.bx.psu.edu 
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of I Kozin
Sent: Tuesday, June 10, 2014 12:42 PM
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] troubleshooting Galaxy with LSF

Hello,
This is largely a repost from the biostar forum following the suggestion there 
to post here.

I'm doing my first steps in setting up a Galaxy server with an LSF job 
scheduler. Recently LSF started supporting DRMAA again so I decided to give it 
a go.

I have two setups. The one that works is a stand along server (OpenSuse 12.1, 
python 2.7.2, LSF 9.1.2). By works I mean that when I login into Galaxy using 
a browser and upload a file, a job gets submitted and run and everything seems 
fine.

The second setup does not work (RH 6.4, python 2.6.6, LSF 9.1.2). It's a server 
running Galaxy which is meant to submit jobs to an LSF cluster. When I 
similarly pick and download a file I get

Job 72266 is submitted to queue short.
./run.sh: line 79: 99087 Segmentation fault  python ./scripts/paster.py 
serve universe_wsgi.ini $@

For the moment, I'm not bothered with the full server setup, I'm just testing 
whether Galaxy works with LSF and therefore run ./run.sh as a user.

The job configuration job_conf.xml is identical in both cases:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="lsf" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
            <param id="drmaa_library_path">/opt/gridware/lsf/9.1/linux2.6-glibc2.3-x86_64/lib/libdrmaa.so</param>
        </plugin>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="lsf_default">
        <destination id="lsf_default" runner="lsf">
            <param id="nativeSpecification">-W 24:00</param>
        </destination>
    </destinations>
</job_conf>

run.sh is only changed to allow remote access.

Most recently I tried replacing python with 2.7.5 to no avail. Still the same 
kind of error. I also updated Galaxy.

Any hints would be much appreciated. Thank you

Re: [galaxy-dev] inconsistent use of tempfile.mkstemp during upload causes problems

2014-06-10 Thread John-Paul Robinson
Thanks for proposing a patch.

We're looking at trying it out on our code base but are at least a few
revisions behind and may need to do a little back porting (or catching up).

Let me make sure I understand the intention of the patch, though.


From the code changes, I trust that the intention is that all temporary
files should be created in the Galaxy dataset directory.  That is, the
temp directory path will be explicitly derived from the output_path
that is an argument to upload.py.

This is fine.

I'm somewhat partial to the environment influencing the TMP dir via an
unadorned mkstemp()  but I can appreciate having a definitive
output_path specified on the call to upload.py.

One issue that will affect us slightly (but probably not others) is that
we currently have our dataset path on a glusterfs volume which doesn't
support ACLs.  This means we won't have a way of overriding umask
settings that may be in place for the user.  If a user with umask 022
writes a file to their import dir, the permissions will at best be
read-only for the galaxy process's group owner.  This may interfere with
the os.rename as well.  With a default ACL we could cause the umask to
be ignored in favor of a permissive ACL.  This is really a local deploy
issue but it's worth knowing about.
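To make the permission arithmetic concrete, a tiny illustrative snippet (not
Galaxy code): with umask 022 a file requested as mode 666 is created as 644,
which leaves only read access for the group owner the galaxy process matches.

requested_mode = 0o666
umask = 0o022
print(oct(requested_mode & ~umask))  # 644 in octal: owner rw, group/other read-only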

A second question, it looks like the patch only updates convert_newlines
and convert_newlines_sep2tabs.   The handle_compressed_file and sep2tabs
methods in sniff also have bare mkstemp() calls. Do they need
modification as well?

~jpr

On 06/10/2014 10:49 AM, John Chilton wrote:
 Hello All,

   Thanks for the well laid out e-mails and great discussion. I think
 John-Paul's comment about the code growing up organically is probably
 exactly right. (A link below has some details from Nate about this).

   So late last night I opened a sprawling pull request that cleaned up
 a lot of stuff in upload.py and then realized it was a little bit
 incorrect and wouldn't actually help any of you until you were able to
 upgrade to the August 2014 release :) so I declined. I have reworked a
 relatively small patch to fix the immediate problem of the consistency
 tmpfile consistency as it relates to shutil.move. John-Paul Robinson,
 can you apply it directly and see if it fixes your problems?

 https://bitbucket.org/jmchilton/galaxy-central-fork-1/commits/dc706d78d9b21a1175199fd9201fe9781d48ffb5/raw

   If it does, the devteam will get this merged and I will continue
 with the upload.py improvements that were inspired by this discussion
 (see https://bitbucket.org/galaxy/galaxy-central/pull-request/408 for
 more details).

 -John

 On Mon, Jun 9, 2014 at 2:50 PM, John-Paul Robinson j...@uab.edu wrote:
 We've considered the sudo solution, but it opens the window to other
 bugs giving galaxy the power to change ownership of other files in our
 shared user cluster environment.  We could isolate the power to a script
 but then we still need to monitor this code closely.  We'd prefer not to
 introduce that requirement.

 I didn't have the time to trace this down either. ;)  I just got tired
 of this issue and the inconsistent failures causing confusion in our
 community.

 I hope your insight into the logic drift over time is accurate and can
 be corrected.  The upload code looks like it's gone through a whole lot
 of organic growth. :/

 Looking forward to additional comments from the dev team.

 ~jpr


 On 06/09/2014 03:30 PM, Kandalaft, Iyad wrote:
 Hi JPR,

 I had the same questions while trying to figure out a fool-proof way to 
 allow users to import files into galaxy on our Cluster.  I couldn't exactly 
 figure out, nor did I have the time to really review, why the galaxy code 
 did these steps and why that shutil.move failed.  I opted to simply insert 
 code in upload.py to sudo chown/chmod the files as an easier hack to 
 this problem.  There are pros and cons to using the tmp var from the env, 
 and it will depend on your intentions/infrastructure.  I think the ideology 
 was that the Galaxy folder was supposed to be shared across all nodes in a 
 cluster, and they opted to use the TMP path within the galaxy folder.  
 Overtime, the code probably partially diverged from that notion, which 
 caused this dilemma.

 I believe that the best fix is to make the underlying code simply copy the 
 files into the environment-provided temp, which is configurable in galaxy's 
 universe_wsgi.ini, and assume ownership from the get-go.  This code of 
 copying and/or moving in discrete steps creates unnecessary complexity.

 Regards,
 Iyad

 -Original Message-
 From: galaxy-dev-boun...@lists.bx.psu.edu 
 [mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of John-Paul Robinson
 Sent: Monday, June 09, 2014 3:08 PM
 To: galaxy-dev@lists.bx.psu.edu
 Subject: [galaxy-dev] inconsistent use of tempfile.mkstemp during upload 
 causes problems

 There appears to be some inconsistent use of tempfile.mkstemp() within 
 upload.py that causes problems when users 

Re: [galaxy-dev] inconsistent use of tempfile.mkstemp during upload causes problems

2014-06-10 Thread John Chilton
On Tue, Jun 10, 2014 at 1:50 PM, John-Paul Robinson j...@uab.edu wrote:
 Thanks for proposing a patch.

 We're looking at trying it out on our code base but are at least a few
 revisions behind and may need to do a little back porting (or catching up).

I think it has been a while since this stuff has been significantly
modified - I would guess the patch applies cleanly for last half dozen
releases.


 Let me make sure I understand the intention of the patch, though.


 From the code changes, I trust that the intention is that all temporary
 files should be created in the Galaxy dataset directory.  That is the
 temp directory path will be the explicitly derived from the output_path
 that is an argument to upload.py.

The attempt is for all temp files that will just be moved into
output_path anyway to be created and written out in the same directory as
output_path. It is difficult to imagine scenarios where this is not on
the same file system as output_path - this means shutil.move should be
maximally performant and minimally error prone (agreed?). In general I
agree that tools shouldn't really be modifying the TMPDIR - the system
configuration should be used and the upload process may still result
in some /tmp files - for instance uploading bam files causes some
files for samtools stdout and stderr to be written to /tmp. This patch
allows them to - because these files will never be moved to the
output_path directory.
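As an illustration only (not the actual patch), the pattern being described
boils down to something like the following; the helper name and signature are
invented for this sketch:

import os
import shutil
import tempfile

def write_next_to_output(data, output_path):
    # Create the temp file in the destination directory so the final
    # shutil.move stays on one file system and degenerates to a rename.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(output_path))
    try:
        with os.fdopen(fd, "wb") as handle:
            handle.write(data)
        shutil.move(tmp_path, output_path)
    except Exception:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise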


 This is fine.

 I'm somewhat partial to the environment influencing the TMP dir via an
 unadorned mkstemp()  but I can appreciate having a definitive
 output_path specified on the call to upload.py.

Responded to this above.


 One issue that will affect us slightly (but probably not others) is that
 we currently have our dataset path on a glusterfs volume which doesn't
 support ACLs.  This means we won't have a way of overriding umask
 settings that may be in place for the user.  If a user with umask 022
 writes a file to their import dir, the permissions will at best be
 read-only for the galaxy processes group owner.  This may prevent with
 the os.rename as well.  With a default ACL we could cause the umask to
 be ignored in favor of a permissive ACL.  This is really a local deploy
 issue but it's worth knowing about.

So this expanded version of this pull request that will hopefully be
included with the next Galaxy release in July should centralize some
of the handling of this and have fewer paths through the code meaning
it will be easier to tailor to your local deployment. At that time -
if there is more we can do - adding different options, extension
points, etc... to further ease your deployment let me know.


 A second question, it looks like the patch only updates convert_newlines
 and convert_newlines_sep2tabs.   The handle_compressed_file and sep2tabs
 methods in sniff also have bare mkstemp() calls. Do they need
 modification as well?

Okay - so sep2tabs is used nowhere in Galaxy as far as I can tell. I
would like to just delete it at some point. handle_compressed_file is
never used during upload.py as far as I can tell - it is for formal
data source tools - which should all behave more like normal jobs and
normal Galaxy tools - so this is less likely to be a problem. There
are some assertions in that last sentence I could be wrong about - and
if that method proves problematic let me know and we can deal with it.

Thanks again for the e-mail and for pushing Galaxy forward on this,

-John


 ~jpr

 On 06/10/2014 10:49 AM, John Chilton wrote:
 Hello All,

   Thanks for the well laid out e-mails and great discussion. I think
 John-Paul's comment about the code growing up organically is probably
 exactly right. (A link below has some details from Nate about this).

   So late last night I opened a sprawling pull request that cleaned up
 a lot of stuff in upload.py and then realized it was a little bit
 incorrect and wouldn't actually help any of you until you were able to
 upgrade to the August 2014 release :) so I declined. I have reworked a
 relatively small patch to fix the immediate problem of the consistency
 tmpfile consistency as it relates to shutil.move. John-Paul Robinson,
 can you apply it directly and see if it fixes your problems?

 https://bitbucket.org/jmchilton/galaxy-central-fork-1/commits/dc706d78d9b21a1175199fd9201fe9781d48ffb5/raw

   If it does, the devteam will get this merged and I will continue
 with the upload.py improvements that were inspired by this discussion
 (see https://bitbucket.org/galaxy/galaxy-central/pull-request/408 for
 more details).

 -John

 On Mon, Jun 9, 2014 at 2:50 PM, John-Paul Robinson j...@uab.edu wrote:
 We've considered the sudo solution, but it opens the window to other
 bugs giving galaxy the power to change ownership of other files in our
 shared user cluster environment.  We could isolate the power to a script
 but then we still need to monitor this code closely.  We'd prefer not to
 introduce that requirement.

 I didn't have the 

[galaxy-dev] upload problems

2014-06-10 Thread Shrum, Donald C
Hi all,

I'm working through a problem with user-uploaded files.  After digging through the 
logs a bit and running the commands one at a time manually, I think I've narrowed 
it down to a permissions problem.  This was confirmed by just running galaxy as root 
and the problem went away ;)

-bash-4.1$ PYTHONPATH=/panfs/storage.local/software/galaxy-dist/lib/
-bash-4.1$ python 
/panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py 
/panfs/storage.local/software/galaxy-dist//panfs/storage.local/scratch/galaxy-data/tmp/tmpSuHquR
 /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf 
6:/panfs/storage.local/software/galaxy-dist/database/job_working_directory/000/6/dataset_6_files:/panfs/storage.local/software/galaxy-dist/database/job_working_directory/000/6/galaxy_dataset_6.dat
Traceback (most recent call last):
  File /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py, 
line 394, in module
__main__()
  File /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py, 
line 369, in __main__
registry.load_datatypes( root_dir=sys.argv[1], config=sys.argv[2] )
  File 
/panfs/storage.local/software/galaxy-dist/lib/galaxy/datatypes/registry.py, 
line 97, in load_datatypes
tree = galaxy.util.parse_xml( config )
  File /panfs/storage.local/software/galaxy-dist/lib/galaxy/util/__init__.py, 
line 154, in parse_xml
tree = ElementTree.parse(fname)
  File build/bdist.linux-x86_64-ucs4/egg/elementtree/ElementTree.py, line 
859, in parse
  File build/bdist.linux-x86_64-ucs4/egg/elementtree/ElementTree.py, line 
576, in parse
IOError: [Errno 13] Permission denied: 
'/panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf'


-bash-4.1$ ls -l /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf
-rw--- 1 dcshrum dcshrum 317 Jun 10 16:30 
/panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf


It does not look like galaxy is using sudo to run the script.  Suggestions to 
work this out?





Re: [galaxy-dev] upload problems

2014-06-10 Thread John Chilton
You didn't include this context, but I am guessing you are attempting
to run jobs as the real user? If not, ignore the rest of the e-mail.

I would generally not recommend running the uploads as real user -
it is a complex process but should go relatively quick.

Understand that may not be possible though. So that file is the
integrated datatypes configuration file I believe. There is just one
global copy that gets created when Galaxy boots up - so it cannot be
chown-ed on a per job basis. The thing is that Galaxy should be
modifying it to be world readable
(https://bitbucket.org/galaxy/galaxy-central/annotate/e2b761a9b1d6d41db71b28df8b62862c7c300eba/lib/galaxy/datatypes/registry.py?at=default#cl-811)
- something is going wrong if it is not. Can you verify the file is
644?
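A quick way to check, assuming the path from the traceback above; this is just
an illustrative snippet, not Galaxy code:

import os
import stat

def is_world_readable(path):
    # True when the "other" read bit is set, e.g. a 644 or 664 mode.
    return bool(os.stat(path).st_mode & stat.S_IROTH)

print(is_world_readable("/panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf"))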

That leads me to believe that users don't have read access to the
global temp directory. Can you check if users can read
/panfs/storage.local/scratch/galaxy-data/tmp/? I think they will need
to in order to use some tools, including uploads.

If you cannot make this directory accessible to users - can you change
Galaxy's new_file_path so that it is some directory globally readable?

-John

On Tue, Jun 10, 2014 at 4:10 PM, Shrum, Donald C dcsh...@admin.fsu.edu wrote:
 Hi all,

 I'm working with a problem with user uploaded files.  After digging through 
 the logs a bit and running the commands on at a time manually I think I've 
 narrowed it to a permissions problem.  This was confirmed by just running 
 galaxy as root and the problem went away ;)

 -bash-4.1$ PYTHONPATH=/panfs/storage.local/software/galaxy-dist/lib/
 -bash-4.1$ python 
 /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py 
 /panfs/storage.local/software/galaxy-dist//panfs/storage.local/scratch/galaxy-data/tmp/tmpSuHquR
  /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf 
 6:/panfs/storage.local/software/galaxy-dist/database/job_working_directory/000/6/dataset_6_files:/panfs/storage.local/software/galaxy-dist/database/job_working_directory/000/6/galaxy_dataset_6.dat
 Traceback (most recent call last):
   File 
 /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py, line 
 394, in module
 __main__()
   File 
 /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py, line 
 369, in __main__
 registry.load_datatypes( root_dir=sys.argv[1], config=sys.argv[2] )
   File 
 /panfs/storage.local/software/galaxy-dist/lib/galaxy/datatypes/registry.py, 
 line 97, in load_datatypes
 tree = galaxy.util.parse_xml( config )
   File 
 /panfs/storage.local/software/galaxy-dist/lib/galaxy/util/__init__.py, line 
 154, in parse_xml
 tree = ElementTree.parse(fname)
   File build/bdist.linux-x86_64-ucs4/egg/elementtree/ElementTree.py, line 
 859, in parse
   File build/bdist.linux-x86_64-ucs4/egg/elementtree/ElementTree.py, line 
 576, in parse
 IOError: [Errno 13] Permission denied: 
 '/panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf'


 -bash-4.1$ ls -l /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf
 -rw--- 1 dcshrum dcshrum 317 Jun 10 16:30 
 /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf


 It does not look like galaxy is using sudo to run the script.  Suggestions to 
 work this out?






Re: [galaxy-dev] upload problems

2014-06-10 Thread Shrum, Donald C
Hi John, and thanks for the reply. I am attempting to run jobs as the real 
user, as jobs will go to our HPC cluster.  This will be an enterprise server.  

/panfs/storage.local/scratch/galaxy-data/ is world writable-
drwxrwxrwx  4 galaxy  galaxy   4096 May  7 09:08 galaxy-data 

as is tmp
-bash-4.1$ ls -l /panfs/storage.local/scratch/galaxy-data/
total 160
drwxrwxrwx 2 galaxy  galaxy  20480 Jun 10 17:00 tmp

I got a little lost on the integrated datatypes configuration file... is that 
an XML file?  I'm not sure which file I'm looking for and I'm new to galaxy.

--Donny

-Original Message-
From: John Chilton [mailto:jmchil...@gmail.com] 
Sent: Tuesday, June 10, 2014 5:33 PM
To: Shrum, Donald C
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] upload problems

You didn't include this context, but I am guessing you are attempting to run 
jobs as the real user? If not, ignore the rest of the e-mail.

I would generally not recommend running the uploads as real user - it is a 
complex process but should go relatively quick.

Understand that may not be possible though. So that file is the integrated 
datatypes configuration file I believe. There is just one global copy that gets 
created with Galaxy boots up - so it cannot be chown-ed on a per job basis. The 
thing is that Galaxy should be modifying it to be world readable
(https://bitbucket.org/galaxy/galaxy-central/annotate/e2b761a9b1d6d41db71b28df8b62862c7c300eba/lib/galaxy/datatypes/registry.py?at=default#cl-811)
- something is going wrong if it is not. Can you verify the file is 644?

That leads me to believe that users don't have read access to the global temp 
directory. Can you check if users can read 
/panfs/storage.local/scratch/galaxy-data/tmp/? I think they will need to to use 
some tools including uploads?

If you cannot make this directory accessible to users - can you change Galaxy's 
new_file_path so that it is some directory globally readable?

-John

On Tue, Jun 10, 2014 at 4:10 PM, Shrum, Donald C dcsh...@admin.fsu.edu wrote:
 Hi all,

 I'm working with a problem with user uploaded files.  After digging 
 through the logs a bit and running the commands on at a time manually 
 I think I've narrowed it to a permissions problem.  This was confirmed 
 by just running galaxy as root and the problem went away ;)

 -bash-4.1$ PYTHONPATH=/panfs/storage.local/software/galaxy-dist/lib/
 -bash-4.1$ python 
 /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py 
 /panfs/storage.local/software/galaxy-dist//panfs/storage.local/scratch/galaxy-data/tmp/tmpSuHquR
  /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf 
 6:/panfs/storage.local/software/galaxy-dist/database/job_working_directory/000/6/dataset_6_files:/panfs/storage.local/software/galaxy-dist/database/job_working_directory/000/6/galaxy_dataset_6.dat
 Traceback (most recent call last):
   File 
 /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py, line 
 394, in module
 __main__()
   File 
 /panfs/storage.local/software/galaxy-dist/tools/data_source/upload.py, line 
 369, in __main__
 registry.load_datatypes( root_dir=sys.argv[1], config=sys.argv[2] )
   File 
 /panfs/storage.local/software/galaxy-dist/lib/galaxy/datatypes/registry.py, 
 line 97, in load_datatypes
 tree = galaxy.util.parse_xml( config )
   File 
 /panfs/storage.local/software/galaxy-dist/lib/galaxy/util/__init__.py, line 
 154, in parse_xml
 tree = ElementTree.parse(fname)
   File build/bdist.linux-x86_64-ucs4/egg/elementtree/ElementTree.py, line 
 859, in parse
   File build/bdist.linux-x86_64-ucs4/egg/elementtree/ElementTree.py, 
 line 576, in parse
 IOError: [Errno 13] Permission denied: 
 '/panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf'


 -bash-4.1$ ls -l 
 /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf
 -rw--- 1 dcshrum dcshrum 317 Jun 10 16:30 
 /panfs/storage.local/scratch/galaxy-data/tmp/tmpYGRnAf


 It does not look like galaxy is using sudo to run the script.  Suggestions to 
 work this out?







[galaxy-dev] Toolshed upload error message

2014-06-10 Thread Vipin TS
Hi Greg,

When I try to upload the next release version of my
https://toolshed.g2.bx.psu.edu/repos/vipints/fml_gff3togtf converter
program to the Community Toolshed, I am getting an internal error message.

I am uploading a tar.gz file to the page https://toolshed.g2.bx.psu.edu/
but this fails.

Could you please pass along the error message or let me know how I can add new files
to the repository and delete the old version.

Thanks,

Vipin | Rätsch Lab

Re: [galaxy-dev] Non-admin tool install/import

2014-06-10 Thread Ross
On Tue, Jun 10, 2014 at 7:37 AM, Karthik Gururaj gururaj.kart...@gmail.com
wrote:

 Thanks - will take a look at the Tool Shed and see if there is anything
 there we can use in safe manner. Else, the old fashioned way of asking
 users to test out on their tools on a standalone Galaxy system and then
 requesting the administrator to pull in the relevant XML files.


Sorry this is a bit confusing...
Just to be quite clear: The Tool Factory is available from the main Tool
Shed, but Tool Factory =/= Tool Shed !!

The Tool Factory is just another Galaxy tool administrators can use to run
scripts interactively in Galaxy.
It installs automatically from the main Tool Shed and optionally generates
new Galaxy tools from working scripts.
New generated tools are in a tgz archive ready to be uploaded into Tool
Shed repositories.

The Tool Shed is a specialised web server that supports version control and
management of Galaxy tool source code and automated installation into
Galaxy instances.
You could run a local Tool Shed and yes, your admins could use it to
install properly configured user provided tools.

The Tool Factory allows your scripting-capable users to create new Galaxy
tools from scripts if they run it as administrators of their own
laptop/development instances, or your administrators could use it to run
scripts directly on your private instance and optionally generate new safe
tools (by uploading the archives to your local tool shed then installing
those new tools into your local Galaxy) for ordinary users to use in their
workflows.




 Thanks,
 Karthik


 On Sun, Jun 8, 2014 at 6:25 PM, Ross ross.laza...@gmail.com wrote:

 Hi, Karthik and John.

 Some details on the tool factory for anyone interested.

 Executive summary: it may be helpful in this context but only for trusted
 administrators.

 TL;DR:

 Firstly, it will refuse to run for anyone other than a local Galaxy
 administrator. This is because it exposes unrestricted scripting so should
 only be installed if you can trust your administrative users not to run cd
 /; rm -rf *. I'd advise installing ONLY on your own private instance and
 NEVER on a public Galaxy.

 Secondly, it has two modes of operation - script running and tool
 generation.

 When executed without the option to generate a tool archive, it will run
 a pasted (perl, python, R, bash) script creating an output in the history.
 This history output is re-doable in the usual Galaxy way including allowing
 the script to be edited and rerun, so it's possible to (eg) get a script
 working interactively - galaxy as an IDE anyone ? :)

 Once a script runs on some test data, the tool factory will optionally
 generate a complete tool shed compatible gzip which can be uploaded to any
 tool shed as a new or updated repository. The generated tool includes the
 supplied test data as a proper Galaxy functional test. Once a tool is in a
 toolshed, it is just another Galaxy tool, ready to be installed to any
 Galaxy like any other tool - but will require restarting of multiple web
 processes as John mentions.

 If the script is safe, the tool is safe - there are no specific security
 risks for tool factory generated tools other than the script itself.

 Finally, currently it takes only one input and generates one output which
 is a substantial restriction - but of course the generated tool source code
 is easy to edit if you need more complex I/O. It has a really neat option
 to create a simple but useful HTML display with links and thumbnails to
 arbitrary (eg) pdf or other output files from a script - the tool form
 includes examples in all 4 scripting languages ready to cut and paste,
 including one which generates 50 random images and presents them in a grid
 as an HTML page for the user.

 On Mon, Jun 9, 2014 at 10:32 AM, John Chilton jmchil...@gmail.com
 wrote:

 Galaxy doesn't really support this use case and it will be major
effort to get it to work this way, I suspect. Pieces to look at include:

 The Galaxy Tool Factory (it has the ability to create reusable tools
 from scripts):

 http://www.ncbi.nlm.nih.gov/pubmed/23024011

 You may be able to modify it in such a way that each tool is tagged
with who created it and then use ToolBox filters to limit added tools to
 a given user:

 https://wiki.galaxyproject.org/UserDefinedToolboxFilters

 I think the latest version of Galaxy has improved support for adding
 tools without requiring restarts (using message queues). I don't know
 if this will automatically work with the tool factory or not.

 I suspect fighting Galaxy at every step on this will frustrate you and
the users - and you are exposing all of your users' data to every user
 you give this privilege to. Is this a shared cluster or is dedicated
 to Galaxy? If it is shared - it might be better for advanced users to
just get importing and exporting data to user directories working really well.
 In my previous position at MSI we created a set of tools that allowed
 Galaxy to SCP files as the user to 

Re: [galaxy-dev] Non-admin tool install/import

2014-06-10 Thread Karthik Gururaj
Sorry, I meant Tool Factory when I wrote Tool Shed in my previous email.


On Tue, Jun 10, 2014 at 5:25 PM, Ross ross.laza...@gmail.com wrote:



 On Tue, Jun 10, 2014 at 7:37 AM, Karthik Gururaj 
 gururaj.kart...@gmail.com wrote:

 Thanks - will take a look at the Tool Shed and see if there is anything
 there we can use in safe manner. Else, the old fashioned way of asking
 users to test out on their tools on a standalone Galaxy system and then
 requesting the administrator to pull in the relevant XML files.


 Sorry this is a bit confusing...
 Just to be quite clear: The Tool Factory is available from the main Tool
 Shed, but Tool Factory =/= Tool Shed !!

 The Tool Factory is just another Galaxy tool administrators can use to run
 scripts interactively in Galaxy.
 It installs automatically from the main Tool Shed and optionally generates
 new Galaxy tools from working scripts.
 New generated tools are in a tgz archive ready to be uploaded into Tool
 Shed repositories.

 The Tool Shed is a specialised web server that supports version control
 and management of Galaxy tool source code and automated installation into
 Galaxy instances.
 You could run a local Tool Shed and yes, your admins could use it to
 install properly configured user provided tools.

 The Tool Factory allows your scripting-capable users of creating new
 Galaxy tools from scripts if they run it as administrators of their own
 laptop/development instances, or your administrators could use it to run
 scripts directly on your private instance and optionally generate new safe
 tools (by uploading the archives to your local tool shed then installing
 those new tools into your local Galaxy) for ordinary users to use in their
 workflows.




  Thanks,
 Karthik


 On Sun, Jun 8, 2014 at 6:25 PM, Ross ross.laza...@gmail.com wrote:

 Hi, Karthik and John.

 Some details on the tool factory for anyone interested.

 Executive summary: it may be helpful in this context but only for
 trusted administrators.

 TL;DR:

 Firstly, it will refuse to run for anyone other than a local Galaxy
 administrator. This is because it exposes unrestricted scripting so should
 only be installed if you can trust your administrative users not to run cd
 /; rm -rf *. I'd advise installing ONLY on your own private instance and
 NEVER on a public Galaxy.

 Secondly, it has two modes of operation - script running and tool
 generation.

 When executed without the option to generate a tool archive, it will run
 a pasted (perl, python, R, bash) script creating an output in the history.
 This history output is re-doable in the usual Galaxy way including allowing
 the script to be edited and rerun, so it's possible to (eg) get a script
 working interactively - galaxy as an IDE anyone ? :)

 Once a script runs on some test data, the tool factory will optionally
 generate a complete tool shed compatible gzip which can be uploaded to any
 tool shed as a new or updated repository. The generated tool includes the
 supplied test data as a proper Galaxy functional test. Once a tool is in a
 toolshed, it is just another Galaxy tool, ready to be installed to any
 Galaxy like any other tool - but will require restarting of multiple web
 processes as John mentions.

 If the script is safe, the tool is safe - there are no specific security
 risks for tool factory generated tools other than the script itself.

 Finally, currently it takes only one input and generates one output
 which is a substantial restriction - but of course the generated tool
 source code is easy to edit if you need more complex I/O. It has a really
 neat option to create a simple but useful HTML display with links and
 thumbnails to arbitrary (eg) pdf or other output files from a script - the
 tool form includes examples in all 4 scripting languages ready to cut and
 paste, including one which generates 50 random images and presents them in
 a grid as an HTML page for the user.

 On Mon, Jun 9, 2014 at 10:32 AM, John Chilton jmchil...@gmail.com
 wrote:

 Galaxy doesn't really support this use case and it will be major
 effort to get it work this way I suspect. Pieces to look at include:

 The Galaxy Tool Factory (it has the ability to create reusable tools
 from scripts):

 http://www.ncbi.nlm.nih.gov/pubmed/23024011

 You may be able to modify it in such a way that each tool is tagged
 with who created and then use ToolBox filters to limit added tools to
 a given user:

 https://wiki.galaxyproject.org/UserDefinedToolboxFilters

 I think the latest version of Galaxy has improved support for adding
 tools without requiring restarts (using message queues). I don't know
 if this will automatically work with the tool factory or not.

 I suspect fighting Galaxy at every step on this will frustrate you and
 the users - and you are exposing all of your users data to every user
 you give this privilege to. Is this a shared cluster or is dedicated
 to Galaxy? If it is shared - it might be better for advanced users to
 just