Re: [galaxy-dev] Creating a galaxy tool in R - You must not use 8-bit bytestrings
Hi Dan,

> I added this to my Rscript_wrapper.sh script and all is well.

I am happy to hear that, and don't worry about answering your own question. There are always other people on the list who will learn from the comments and/or find the solution later in the mail archive ;)

regards,
Hans

___
Please keep all replies on the list by using reply all in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
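For context, the "You must not use 8-bit bytestrings" error comes from SQLite (via pysqlite/SQLAlchemy) when it is handed raw byte strings instead of unicode, typically text captured from a tool's output. The actual fix here lived in the poster's Rscript_wrapper.sh (not shown); the snippet below is only a hedged illustration of the underlying issue, not the wrapper itself:

```python
def to_unicode(raw, encoding="utf-8"):
    """Decode raw bytes to unicode text so that SQLite never sees an
    8-bit bytestring; undecodable bytes are replaced rather than
    raising. A minimal sketch, not Galaxy's own code."""
    if isinstance(raw, bytes):
        return raw.decode(encoding, errors="replace")
    return raw
```

Anything along these lines (or forcing the tool to emit plain ASCII/UTF-8, as a shell wrapper can) keeps non-unicode output from reaching the database layer.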
[galaxy-dev] 503 Error when adding a tool from local toolshed
Dear List,

I'm trying to check my toolshed tools by installing first on a local toolshed, but running into an error every time I try to add my tools. In order to rule out possible idiosyncrasies of my system I've reproduced the error on an Ubuntu VM. The system was not completely clean (a few packages required by galaxy and some dependencies of my tools have been installed), but I performed the following steps after creating a fresh galaxy user:

# Create new user galaxy
# Create galaxy_env via virtualenv.py
# Add entry to .bashrc to ensure python in galaxy_env is the default python for the galaxy user
# Fresh checkout of galaxy-central
# sh run.sh --reload
# Add admin user to galaxy by configuring universe_wsgi.ini
# Configure tool_sheds_conf.xml as follows:

<?xml version="1.0"?>
<tool_sheds>
    <tool_shed name="Local" url="http://127.0.0.1:9009/"/>
    <tool_shed name="Galaxy main tool shed" url="http://toolshed.g2.bx.psu.edu/"/>
    <tool_shed name="Galaxy test tool shed" url="http://testtoolshed.g2.bx.psu.edu/"/>
</tool_sheds>

# Restart galaxy
# Register myself as a user of galaxy
# Add database_connection line to community_wsgi.ini - database_connection=sqlite:///./database/community.sqlite?isolation_level=IMMEDIATE
# Add admin user to community_wsgi.ini (same as for galaxy admin user)
# Run sh run_community.sh
# Register myself as a user of the local toolshed
# Checked out my tools: hg clone https://bitbucket.org/iracooke/protk-toolshed
# Made a bz2 file of all my tools: cd protk-toolshed && ./make_package_data.sh
# Created a new category on the local toolshed (Proteomics)
# Added a tool called protk to the Proteomics category
# Uploaded a bz2 file with my tools to the local toolshed. Looking at the tool after upload everything seems fine.
# Back on galaxy I attempted to load my tools.
After clicking the install button in galaxy I get the following traceback (as text):

URL: http://127.0.0.1:8080/admin_toolshed/install_repository?tool_shed_url=http://127.0.0.1:9009/&repo_info_dict=0a19b3600379cf51a9d8ca26ff0e7754f5e99731:7b2270726f746b223a205b2250726f74656f6d69637320746f6f6c6b6974222c2022687474703a2f2f697261636f6f6b65403132372e302e302e313a393030392f7265706f732f697261636f6f6b652f70726f746b222c2022646330343464623536626634222c202230225d7d&includes_tools=True

File '/home/galaxy/central/galaxy-central/eggs/WebError-0.8a-py2.7.egg/weberror/evalexception/middleware.py', line 364 in respond
  app_iter = self.application(environ, detect_start_response)
File '/home/galaxy/central/galaxy-central/eggs/Paste-1.6-py2.7.egg/paste/debug/prints.py', line 98 in __call__
  environ, self.app)
File '/home/galaxy/central/galaxy-central/eggs/Paste-1.6-py2.7.egg/paste/wsgilib.py', line 539 in intercept_output
  app_iter = application(environ, replacement_start_response)
File '/home/galaxy/central/galaxy-central/eggs/Paste-1.6-py2.7.egg/paste/recursive.py', line 80 in __call__
  return self.application(environ, start_response)
File '/home/galaxy/central/galaxy-central/eggs/Paste-1.6-py2.7.egg/paste/httpexceptions.py', line 632 in __call__
  return self.application(environ, start_response)
File '/home/galaxy/central/galaxy-central/lib/galaxy/web/framework/base.py', line 160 in __call__
  body = method( trans, **kwargs )
File '/home/galaxy/central/galaxy-central/lib/galaxy/web/framework/__init__.py', line 184 in decorator
  return func( self, trans, *args, **kwargs )
File '/home/galaxy/central/galaxy-central/lib/galaxy/web/controllers/admin_toolshed.py', line 371 in install_repository
  response = urllib2.urlopen( url )
File '/usr/lib/python2.7/urllib2.py', line 126 in urlopen
  return _opener.open(url, data, timeout)
File '/usr/lib/python2.7/urllib2.py', line 400 in open
  response = meth(req, response)
File '/usr/lib/python2.7/urllib2.py', line 513 in http_response
  'http', request, response, code, msg, hdrs)
File '/usr/lib/python2.7/urllib2.py', line 438 in error
  return self._call_chain(*args)
File '/usr/lib/python2.7/urllib2.py', line 372 in _call_chain
  result = func(*args)
File '/usr/lib/python2.7/urllib2.py', line 521 in http_error_default
  raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 503: Service Unavailable

I've since tried to figure out where this error actually occurs, and (according to my crude debugging methods) the actual error originates at line 371 in admin_toolshed.py, which looks like this:

owner = get_repository_owner( clean_repository_clone_url( repository_clone_url ) )
url = '%s/repository/get_readme?name=%s&owner=%s&changeset_revision=%s&webapp=galaxy' % ( tool_shed_url, name, owner, changeset_revision )
response = urllib2.urlopen( url )

So galaxy attempts to fetch a url for the tool readme file, and that is failing. I've tried adding a readme.txt and that didn't solve the issue. Hopefully this is enough for you to be able to reproduce the error. I'm mainly just keen to get a workable setup for testing my tools, so a
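One way to probe the failing request outside of Galaxy is to rebuild the get_readme URL that install_repository fetches and request it directly (with urllib or curl) against the local tool shed. The `&` separators and the name/owner/revision values below are assumptions inferred from the report; substitute your own repository's details:

```python
def get_readme_url(tool_shed_url, name, owner, changeset_revision):
    """Rebuild the URL that Galaxy's install_repository fetches at
    line 371 -- the request returning 503 in the traceback above."""
    return ("%s/repository/get_readme?name=%s&owner=%s"
            "&changeset_revision=%s&webapp=galaxy"
            % (tool_shed_url.rstrip("/"), name, owner, changeset_revision))

# Hypothetical values matching this report; fetch the result manually
# to see whether the tool shed itself, or something in between, 503s.
url = get_readme_url("http://127.0.0.1:9009/", "protk", "iracooke", "dc044db56bf4")
```

If a manual fetch of that URL also returns 503, the problem is on the tool shed side rather than in Galaxy's installer.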
[galaxy-dev] update_metadata.sh references missing get_python.sh
Hello all,

The shell script scripts/cleanup_datasets/update_metadata.sh tries to call the non-existent file scripts/get_python.sh. What is this intended to do?

Thanks,
Peter
Re: [galaxy-dev] Merging BLAST database support into Galaxy?
Hi Edward,

I've started work on this in earnest now. I see you only defined one new datatype, blastdb, which worked for nucleotide databases. I want to handle protein databases too, so I think two datatypes makes sense - which I am currently calling blastdbn and blastdbp. That won't be compatible with your existing tools history, but other than that seems sensible to me. I suppose we could use blastdb and blastdb_p which would match the *.loc files? What do you think?

Peter
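As a hedged sketch of the two-datatype split being proposed: a real Galaxy datatype would subclass galaxy.datatypes.data.Data and register its component files via the composite-file machinery, but plain classes are enough to show the shape. The component extensions match what makeblastdb produces for nucleotide (-dbtype nucl) and protein (-dbtype prot) databases; class and attribute names here are illustrative, not Galaxy's actual API.

```python
class BlastDb(object):
    """Sketch of a composite BLAST database datatype.
    (The real class would subclass galaxy.datatypes.data.Data.)"""
    composite_type = "basic"
    component_exts = ()

class BlastNucDb(BlastDb):
    """Nucleotide database, as produced by makeblastdb -dbtype nucl."""
    file_ext = "blastdbn"
    component_exts = (".nhr", ".nin", ".nsq")

class BlastProtDb(BlastDb):
    """Protein database, as produced by makeblastdb -dbtype prot."""
    file_ext = "blastdbp"
    component_exts = (".phr", ".pin", ".psq")
```

Keeping a shared base class means tools that accept either database kind can still declare one common format if that ever becomes useful.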
Re: [galaxy-dev] ERROR on History export
Hi Todd,

History export is a beta feature and hasn't been fully tested yet. We'll look into this, but it's difficult to diagnose a bug on a local instance. Can you reproduce it on either our main or test server?

Thanks,
J.

On Apr 25, 2012, at 8:08 PM, Todd Oakley wrote:

Hello, I am trying to export histories to upload to another galaxy instance. For some histories, it works fine. However, for others, I get an error. When I turn on debug, I get the error pasted below. Any thoughts on what is the cause of "object has no attribute 'hid'"?

Thanks!
Todd

Server Error

Module paste.exceptions.errormiddleware:143 in __call__
  app_iter = self.application(environ, start_response)
Module paste.debug.prints:98 in __call__
  environ, self.app)
Module paste.wsgilib:539 in intercept_output
  app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__
  return self.application(environ, start_response)
Module paste.httpexceptions:632 in __call__
  return self.application(environ, start_response)
Module galaxy.web.framework.base:145 in __call__
  body = method( trans, **kwargs )
Module galaxy.web.controllers.history:591 in export_archive
  history_exp_tool.execute( trans, incoming = params, set_output_hid = True )
Module galaxy.tools:1276 in execute
  return self.tool_action.execute( self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs )
Module galaxy.tools.actions.history_imp_exp:103 in execute
  include_deleted=incoming[ 'include_deleted' ] )
Module galaxy.tools.imp_exp:419 in setup_job
  input_datasets = [ assoc.dataset.hid for assoc in job.input_datasets ]
AttributeError: 'NoneType' object has no attribute 'hid'

--
***
Todd Oakley, Professor
Ecology Evolution and Marine Biology
University of California, Santa Barbara
Santa Barbara, CA 93106 USA
***
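Reading the last frame of the traceback, the AttributeError means at least one job input association has no dataset attached (assoc.dataset is None, e.g. a deleted input), and the list comprehension in setup_job dereferences it unconditionally. A hedged sketch of the missing guard (function name is ours, not Galaxy's):

```python
def input_hids(input_dataset_assocs):
    """Collect the history item numbers ('hid') of a job's inputs,
    skipping associations whose dataset is missing -- the unchecked
    case that raises AttributeError: 'NoneType' object has no
    attribute 'hid' in setup_job above."""
    return [assoc.dataset.hid for assoc in input_dataset_assocs
            if assoc.dataset is not None]
```

Whether silently skipping such inputs is the right fix (versus reporting them to the user) is a design question for the Galaxy team; the sketch only shows where the None slips through.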
Re: [galaxy-dev] 503 Error when adding a tool from local toolshed
Hello Ira,

Sorry you bumped into this problem. I've cloned your protk-toolshed repository and made a bz2 compressed archive as you did. I was able to upload it to a local tool shed, and was successful installing from there to my local Galaxy instance, where my environment is running Galaxy changeset revision 7116:0cffe389e1b3.

This tells me that the problem is most likely due to your running an older version of Galaxy. Are you tracking our galaxy-central repository in your environment? Since the galaxy-dist repository is generally 4-8 weeks older than the galaxy-central repository, many of the features enabling the Galaxy - tool shed communication are not functional in galaxy-dist.

If this is not the problem, can you send me the snippet of your paster log from your tool shed that shows the request that produced the 503 error? This should help get things figured out.

By the way, from your description, I assume you are not running an Apache front-end on either your Galaxy instance or your tool shed - is this correct?

Thanks!

Greg Von Kuster

On Apr 26, 2012, at 3:41 AM, Ira Cooke wrote:

Dear List, I'm trying to check my toolshed tools by installing first on a local toolshed, but running into an error every time I try to add my tools. ...
[galaxy-dev] Composite datatypes and peek and main display
Hello all,

I'm looking at 'basic' composite datatypes in Galaxy, based on how Edward did BLAST databases in his BLAST+ fork on the toolshed. I've extended his work to handle both protein and nucleotide databases, and have got this working for tools to create a database (i.e. wrapped makeblastdb) and use a database from the history (e.g. the blastp tool). That side of things seems to be working OK.

What I am not quite understanding is how to modify the behaviour of these datatypes in the Galaxy GUI. Things like the peek contents and what happens when clicking on the eye icon are not explained here: http://wiki.g2.bx.psu.edu/Admin/Datatypes/Composite%20Datatypes

What I have in mind is to show a simple text summary as the peek or 'eye' display view giving the database name and size (perhaps captured when the database is created, or on the fly via the blastdbcmd command). It seems one option would be to use an 'auto_primary_file' composite instead of 'basic'? Then I can use the generate_primary_file method to show some text?

Have I overlooked some documentation?

Thanks,
Peter
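The 'auto_primary_file' idea can be sketched as follows. In the real Galaxy API generate_primary_file is a method on a datatype subclass and receives a dataset object; the class name, the dataset attributes used, and the plain-text (rather than HTML) output here are all assumptions for illustration:

```python
class BlastDbAuto(object):
    """Hedged sketch of an 'auto_primary_file' composite datatype:
    the generated primary file is what the peek and the 'eye' icon
    display. Not Galaxy's actual base class or API."""
    composite_type = "auto_primary_file"

    def generate_primary_file(self, dataset):
        # dataset.name and dataset.num_sequences are assumed
        # attributes -- e.g. captured at creation time from
        # makeblastdb or queried via blastdbcmd.
        return ("BLAST database: %s\nSequences: %s\n"
                % (dataset.name, dataset.num_sequences))
```

If this matches how 'auto_primary_file' composites actually work, the summary text would then serve as both the peek and the eye-icon view, which is exactly the behaviour being asked about.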
Re: [galaxy-dev] Stalled upload jobs under Admin, Manage jobs
On Fri, Mar 16, 2012 at 11:00 AM, Peter Cock p.j.a.c...@googlemail.com wrote:

On Mon, Feb 13, 2012 at 5:02 PM, Nate Coraor n...@bx.psu.edu wrote:

On Feb 10, 2012, at 6:47 AM, Peter Cock wrote:

Hello all, I've noticed we have about a dozen stalled upload jobs on our server from several users, e.g.:

Job ID  User  Last Update   Tool     State   Command Line  Job Runner  PID/Cluster ID
2352          21 hours ago  upload1  upload  None          None        None
...
2339          19 hours ago  upload1  upload  None          None        None

The job numbers are consecutive (2339 to 2352) and reflect a problem for a couple of hours yesterday morning. I believe this was due to the underlying file system being unmounted (without restarting Galaxy), and at the time restarting Galaxy fixed uploading files. Test jobs since then have completed normally - but these zombie jobs remain. Using the Stop jobs option does not clear these dead upload jobs. Restarting the Galaxy server does not clear them either. This is our production server and was running galaxy-dist, changeset 5743:720455407d1c - which I have now updated to the current release, 6621:26920e20157f - which makes no difference to these stalled jobs. Does anyone have any insight into what might be wrong, and how to get rid of these zombie tasks?

Hi Peter,

Are you using the nginx upload module? There's no way to fix these from within Galaxy, unfortunately. You'll have to update them in the database.

--nate

Hi Nate,

Sorry for the delay - I must have missed your reply. No, we're not using nginx here. What should I edit in the database? Presumably rather than deleting these jobs I should set the state to finished with error? (Is there any documentation about the Galaxy database schema, and the values of fields in it - or is that all considered to be an internal detail?)

Sorry to nag - my zombie jobs are still there and I'd like a little guidance about how to delete them (e.g. which tables and what status should I change them to).
Thanks,
Peter
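Since Nate's answer is "update them in the database", one way to do that for a SQLite-backed instance is sketched below. The `job` table and its `state` column match the Galaxy schema as commonly described, but that is an assumption to verify against your own database, this is not an officially sanctioned procedure, and you should stop Galaxy and back up the database file before touching it:

```python
import sqlite3

def fail_stalled_jobs(db_path, job_ids):
    """Mark stalled jobs as errored directly in Galaxy's database.
    Hedged sketch: table/column names assumed, back up the DB first,
    and run only while Galaxy is stopped."""
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        conn.executemany("UPDATE job SET state = 'error' WHERE id = ?",
                         [(jid,) for jid in job_ids])
    conn.close()
```

For a PostgreSQL/MySQL-backed instance the equivalent UPDATE statement can be issued from the database shell instead.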
[galaxy-dev] library_import_dir -- How is it supposed to work?
I set library_import_dir to a path and tried uploading a directory of bam files. After fixing the situation so galaxy could find samtools in that subshell, I was able to upload links to the history. But moving things to one directory did not appear to be terribly useful, so I tested what happened if I had subdirectories existing in the library import directory.

Test 1: I used folders u1 and u2, each with data, and some data in the root library_import_dir. After clearing the samtools error I was presented with a drop-down list with choices 'None', 'u1' and 'u2'. Selecting 'None' did not result in seeking data but a sharp reminder that I had to pick a directory. Selecting those directories led to uploads, with the non-copy correctly sized and even downloadable from the data library, but NOT usable in the history, because upload.py decided the file did not exist (probably the 'path' variable in os.path.exists(), line 99). It also became apparent that the upload would look one level down from the root directory and no further (tested by adding u3 with data, and a subdirectory of u3 called v3, also with data).

So the current state of affairs is that it is a single directory to which one must move files in order to upload links to a galaxy library or folders thereof. OR, alternatively, make a directory of links called directory A, and then another directory of links to the links in directory A at the library_import_dir, and then ask Galaxy to copy the data (not fully tested yet).

But it is apparent from the UI that more utility was intended. If I have time, I will help with that.

Michael Moore
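The observed one-level-only scan in Test 1 amounts to listing the immediate subdirectories of library_import_dir and nothing deeper. A sketch of that behaviour (this is our reconstruction from the symptoms, not Galaxy's actual code):

```python
import os

def import_dir_choices(import_dir):
    """Mimic the drop-down observed above: only the immediate
    subdirectories of library_import_dir are offered (u1, u2, u3);
    deeper directories such as u3/v3 are never scanned."""
    return sorted(entry for entry in os.listdir(import_dir)
                  if os.path.isdir(os.path.join(import_dir, entry)))
```

A recursive os.walk over the import directory would be the obvious way to surface nested folders like u3/v3, if that is the utility the UI intended.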
[galaxy-dev] New Tool - Can provide a template file to user
Hi all,

I am trying to add a tool to galaxy. I want to collect some data from the user. The data may contain hundreds of records, each with 7 or 8 fields. Either the user can upload a spreadsheet for that, or I shall provide a template file to fill up the data. When I checked, there is no provision for uploading a spreadsheet in my galaxy instance. What about the second method? Is it possible to provide a template file for the user?

Deepthi
Re: [galaxy-dev] New Tool - Can provide a template file to user
> Either the user can upload a spreadsheet for that, or I shall provide a template file to fill up the data. When I checked, there is no provision for uploading a spreadsheet in my galaxy instance.

Two simple options:

(1) have users convert their spreadsheet data to csv/tabular format; Galaxy works well with tabular data.

(2) create your own datatype: http://wiki.g2.bx.psu.edu/Admin/Datatypes/Adding%20Datatypes
There are many examples for tabular data such as SAM, BED, GFF, etc., in the Galaxy base - see tabular.py and interval.py

Best,
J.
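For option (1), the conversion step can be done with the standard library once the spreadsheet has been exported to CSV. A minimal sketch (file names are placeholders):

```python
import csv

def csv_to_tabular(csv_path, tab_path):
    """Convert a spreadsheet's CSV export to the tab-separated text
    that Galaxy's tabular datatype expects; quoted fields containing
    commas are handled by the csv module."""
    with open(csv_path, newline="") as src, \
         open(tab_path, "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        for row in csv.reader(src):
            writer.writerow(row)
```

The resulting .tab file can then be uploaded through Galaxy's normal upload tool with the tabular datatype selected.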
Re: [galaxy-dev] New Tool - Can provide a template file to user
Thank you J. I will try that.

Deepthi

On 4/26/12, Jeremy Goecks jeremy.goe...@emory.edu wrote:
...

--
Deepthi Theresa Thomas Kannanayakal
1919 University Drive NW, Apt. No. E104
T2N 4K7 Calgary, Alberta
Canada
Ph: (403) 483 7409, (403) 618 5956
Email: deepthither...@gmail.com
[galaxy-dev] What macs version support in Galaxy
On our local installation of Galaxy, we get the following error when running MACS:

Usage: macs -t tfile [-n name] [-g genomesize] [options]
Example: macs -t ChIP.bam -c Control.bam -f BAM -g h -n test -w --call-subpeaks
macs: error: no such option: --lambdaset

What is the latest version of MACS supported by Galaxy? We are running 1.4.1.

Mike Waldron
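For what it's worth, the --lambdaset option existed in the MACS 1.3.x series and appears to have been removed in 1.4, and the Galaxy wrapper of this era was, as far as we can tell, written against MACS 1.3.7.1 - both points worth verifying against the MACS changelog and your tool_conf. A generic way to catch this class of mismatch is to compare the options a wrapper passes against the installed binary's help output (obtained e.g. via subprocess.check_output(["macs", "--help"])):

```python
def unsupported_options(help_text, wrapper_options):
    """Return the options a wrapper passes that the installed tool's
    --help output does not mention. Crude substring check, but enough
    to flag a removed option like --lambdaset before a job fails."""
    return [opt for opt in wrapper_options if opt not in help_text]

# The usage/example lines from the 1.4.1 error above lack --lambdaset:
macs_141_help = ("Usage: macs -t tfile [-n name] [-g genomesize] [options] "
                 "Example: macs -t ChIP.bam -c Control.bam -f BAM -g h "
                 "-n test -w --call-subpeaks")
```

Running such a check at tool-install time would surface the version mismatch as a clear message rather than a mid-job "no such option" error.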
Re: [galaxy-dev] Merging BLAST database support into Galaxy?
Your suggestion for blastdbn and blastdbp sounds fine. It's okay if a few of our users need to edit the metadata of the DBs in their history. Thanks for asking and doing this.

On Thu, Apr 26, 2012 at 5:37 AM, Peter Cock p.j.a.c...@googlemail.com wrote:

Hi Edward, I've started work on this in earnest now. I see you only defined one new datatype, blastdb, which worked for nucleotide databases. ...
[galaxy-dev] Increasing EBS storage above 1TB for use with Galaxy cloudman.
Hello,

I am performing NGS data analysis using Galaxy cloudman. I am working with very large Fastq files that add up and approach the 1TB limit of the EBS volume when I perform my analysis. Is there a way to use multiple EBS volumes with one EC2 instance of Galaxy cloudman? For example, could I direct which EBS volume holds certain files to prevent reaching the 1TB limit while still allowing my EC2 instance to access and perform jobs on all files?

Many thanks in advance!

Cheers,
Mo Heydarian