[galaxy-dev] Manual Tool and dependency installation

2014-05-14 Thread Ravi Alla
Hi All,
Due to the nature of our cluster we are not allowed to have compilers on our 
webserver, which is where our Galaxy instance runs. Because of this, tool 
dependencies and tools that need to be compiled fail to install through the 
Galaxy Tool Shed web page. Is there a way for me to install dependencies and 
tools manually? For example, I am trying to install the GATK2 wrapper. This 
depends on samtools-0.1.19 from iuc, which in turn requires ncurses-5.9. Are 
there instructions somewhere on where to install these tools so that they 
are tied to the GATK2 wrapper? Pardon me if this is a naive question, but I am 
really out of sorts here.
Thank you
Ravi
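For what it's worth, one workable approach is to build the dependencies on a machine that does have compilers and then drop the results into the directory layout the Tool Shed itself would have created under tool_dependency_dir (set in universe_wsgi.ini), with an env.sh that Galaxy sources at job time. A hedged sketch; the owner ("iuc"), repository, and revision ("default") directory names here are illustrative and must match the repository you actually installed:

```shell
# Assumption: DEP_DIR is your tool_dependency_dir from universe_wsgi.ini.
DEP_DIR="${DEP_DIR:-$HOME/tool_deps}"
INSTALL="$DEP_DIR/samtools/0.1.19/iuc/package_samtools_0_1_19/default"
mkdir -p "$INSTALL/bin"
# Build samtools 0.1.19 (and its ncurses-5.9 prerequisite) on a host with
# compilers, then copy the resulting samtools binary into "$INSTALL/bin".
# Galaxy sources env.sh before running any tool that declares this dependency:
cat > "$INSTALL/env.sh" <<EOF
PATH="$INSTALL/bin:\$PATH"; export PATH
EOF
```

The same pattern repeats per dependency (one name/version/owner/repo/revision directory, one env.sh each), so the GATK2 wrapper picks them up exactly as if the Tool Shed had compiled them.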
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] This tool was disabled before the job completed. Please contact your Galaxy administrator.

2014-04-04 Thread Ravi Alla
Never mind,
I figured it out. I deleted all instances of the job and the temp dataset.
Thanks
R
On Apr 4, 2014, at 2:29 PM, Ravi Alla  wrote:

> Anyone please?? I cannot even start the galaxy server. Makes me wonder if 
> something in the database got messed up.
> On Apr 4, 2014, at 11:16 AM, Ravi Alla  wrote:
> 
>> Hi devs,
>> I deleted a tool called upload_local_file and tried to restart galaxy server 
>> but I keep getting the following error. Any idea how to solve this?
>> 
>>  File 
>> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", 
>> line 39, in app_factory
>> app = UniverseApplication( global_conf = global_conf, **kwargs )
>>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/app.py", line 130, in 
>> __init__
>> self.job_manager = manager.JobManager( self )
>>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/manager.py", line 
>> 37, in __init__
>> self.job_handler.start()
>>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 
>> 35, in start
>> self.job_queue.start()
>>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 
>> 78, in start
>> self.__check_jobs_at_startup()
>>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 
>> 109, in __check_jobs_at_startup
>> JobWrapper( job, self ).fail( 'This tool was disabled before the job 
>> completed.  Please contact your Galaxy administrator.' )
>>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/__init__.py", 
>> line 761, in fail
>> self.app.object_store.update_from_file(dataset.dataset, create=True)
>>   File 
>> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/objectstore/__init__.py", 
>> line 349, in update_from_file
>> self.create(obj, **kwargs)
>>   File 
>> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/objectstore/__init__.py", 
>> line 298, in create
>> open(path, 'w').close()  # Should be rb?
>> IOError: [Errno 2] No such file or directory: 
>> '/global/scratch/galaxy/webapp/files/000/dataset_194.dat'
>> 
>> Thanks
>> Ravi
> 


Re: [galaxy-dev] This tool was disabled before the job completed. Please contact your Galaxy administrator.

2014-04-04 Thread Ravi Alla
Anyone please?? I cannot even start the galaxy server. Makes me wonder if 
something in the database got messed up.
On Apr 4, 2014, at 11:16 AM, Ravi Alla  wrote:

> Hi devs,
> I deleted a tool called upload_local_file and tried to restart galaxy server 
> but I keep getting the following error. Any idea how to solve this?
> 
>  File 
> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", 
> line 39, in app_factory
> app = UniverseApplication( global_conf = global_conf, **kwargs )
>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/app.py", line 130, in 
> __init__
> self.job_manager = manager.JobManager( self )
>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/manager.py", line 
> 37, in __init__
> self.job_handler.start()
>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 
> 35, in start
> self.job_queue.start()
>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 
> 78, in start
> self.__check_jobs_at_startup()
>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 
> 109, in __check_jobs_at_startup
> JobWrapper( job, self ).fail( 'This tool was disabled before the job 
> completed.  Please contact your Galaxy administrator.' )
>   File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/__init__.py", line 
> 761, in fail
> self.app.object_store.update_from_file(dataset.dataset, create=True)
>   File 
> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/objectstore/__init__.py", line 
> 349, in update_from_file
> self.create(obj, **kwargs)
>   File 
> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/objectstore/__init__.py", line 
> 298, in create
> open(path, 'w').close()  # Should be rb?
> IOError: [Errno 2] No such file or directory: 
> '/global/scratch/galaxy/webapp/files/000/dataset_194.dat'
> 
> Thanks
> Ravi


[galaxy-dev] This tool was disabled before the job completed. Please contact your Galaxy administrator.

2014-04-04 Thread Ravi Alla
Hi devs,
I deleted a tool called upload_local_file and tried to restart the Galaxy 
server, but I keep getting the following error. Any idea how to solve this?

 File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/app.py", line 130, in __init__
self.job_manager = manager.JobManager( self )
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/manager.py", line 37, in __init__
self.job_handler.start()
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 35, in start
self.job_queue.start()
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 78, in start
self.__check_jobs_at_startup()
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/handler.py", line 109, in __check_jobs_at_startup
JobWrapper( job, self ).fail( 'This tool was disabled before the job completed.  Please contact your Galaxy administrator.' )
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/__init__.py", line 761, in fail
self.app.object_store.update_from_file(dataset.dataset, create=True)
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/objectstore/__init__.py", line 349, in update_from_file
self.create(obj, **kwargs)
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/objectstore/__init__.py", line 298, in create
open(path, 'w').close()  # Should be rb?
IOError: [Errno 2] No such file or directory: '/global/scratch/galaxy/webapp/files/000/dataset_194.dat'

Thanks
Ravi

Re: [galaxy-dev] Galaxy unhandled exception checking jobs error

2014-04-03 Thread Ravi Alla
Hi John,
Thank you so much for this detailed explanation. Before I saw your email I went 
ahead and manually made the changes in the pbs.py script. I didn't follow your 
mercurial commands and hope that is not a problem down the road (I guess I 
could use the hg revert option then). This seemed to have solved the problems I 
was seeing before. 
Cheers
Ravi
On Apr 2, 2014, at 7:47 PM, John Chilton  wrote:

> This change is very small and was not committed to the stable branch
> of galaxy-central so I would just modify your Galaxy's copy of the
> pbs runner file directly until the next release:
> 
> % wget 
> https://bitbucket.org/galaxy/galaxy-central/commits/af5577a24c155fa04aa607ff2fec283634df2fb0/raw
> -O /tmp/pbs.patch
> % hg import --no-commit /tmp/pbs.patch
> 
> When you go to update Galaxy next it will probably warn you that this
> file has been modified - at that time you can just run the following
> command to cleanup your Galaxy instance:
> 
> % hg revert lib/galaxy/jobs/runners/pbs.py
> 
> Hope this helps. The Galaxy team is actively discussing alternative
> ways to distribute fixes between large releases - hopefully I can stop
> e-mailing out random mercurial commands at some point :).
> 
> Back to your broader question however, the Galaxy release and update
> process is such that you really shouldn't lose your configuration as a
> result of updating Galaxy. Galaxy distributes sample tool_conf.xml,
> universe_wsgi.ini, etc... files but the actual configuration files
> themselves are not tracked in the Galaxy central repositories so these
> files should be unaffected by updates. Deviations from this ideal
> should be rare and I believe will be spelled out in the dev news for
> releases as they occur. Hope this helps.
> 
> -John
> 
> 
> 
> On Wed, Apr 2, 2014 at 12:27 PM, Ravi Alla  wrote:
>> John,
>> Thanks for this. I am new to managing galaxy, but how do I go about updating 
>> my galaxy to reflect these changes? I found directions on 
>> https://wiki.galaxyproject.org/Admin/GetGalaxy#Keep_your_code_up_to_date 
>> about pulling changes from the bitbucket repository. Does this preserve my 
>> previous galaxy settings?
>> Do I have to back up before I do this? I spent considerable time to get 
>> galaxy to work on the cluster and don't want to ruin it.
>> Thank you
>> Ravi
>> On Apr 2, 2014, at 9:58 AM, John Chilton  wrote:
>> 
> This looks very similar to an issue discussed here:
>>> http://dev.list.galaxyproject.org/pbs-runner-deserializes-server-names-as-unicode-tt4663616.html.
>>> 
> The default branch of Galaxy contains a changeset that should address
>>> this issue - 
>>> https://bitbucket.org/galaxy/galaxy-central/commits/af5577a24c155fa04aa607ff2fec283634df2fb0.
>>> 
>>> Hope this helps.
>>> 
>>> -John
>>> 
>>> On Wed, Apr 2, 2014 at 11:35 AM, Ravi Alla  wrote:
>>>> Hi guys,
>>>> I keep getting an error every time I start up the galaxy server. I am
>>>> guessing this has to do with a job that galaxy is trying to resume and
>>>> cannot find.
>>>> 
>>>> galaxy.jobs.runners ERROR 2014-04-02 09:31:10,889 Unhandled exception
>>>> checking active jobs
>>>> Traceback (most recent call last):
>>>> File
>>>> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/__init__.py",
>>>> line 366, in monitor
>>>>   self.check_watched_items()
>>>> File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py",
>>>> line 363, in check_watched_items
>>>>   ( failures, statuses ) = self.check_all_jobs()
>>>> File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py",
>>>> line 452, in check_all_jobs
>>>>   c = pbs.pbs_connect( pbs_server_name )
>>>> TypeError: in method 'pbs_connect', argument 1 of type 'char *'
>>>> 
>>>> Because of this error I cannot get any other jobs to run either. They just
>>>> sit queued on the cluster.
>>>> Any ideas?
>>>> Thanks
>>>> 
>> 




Re: [galaxy-dev] Galaxy unhandled exception checking jobs error

2014-04-02 Thread Ravi Alla
John,
Thanks for this. I am new to managing galaxy, but how do I go about updating my 
galaxy to reflect these changes? I found directions on 
https://wiki.galaxyproject.org/Admin/GetGalaxy#Keep_your_code_up_to_date about 
pulling changes from the bitbucket repository. Does this preserve my previous 
galaxy settings?
Do I have to back up before I do this? I spent considerable time to get galaxy 
to work on the cluster and don't want to ruin it.
Thank you
Ravi
On Apr 2, 2014, at 9:58 AM, John Chilton  wrote:

> This looks very similar to an issue discussed here:
> http://dev.list.galaxyproject.org/pbs-runner-deserializes-server-names-as-unicode-tt4663616.html.
> 
> The default branch of Galaxy contains a changeset that should address
> this issue - 
> https://bitbucket.org/galaxy/galaxy-central/commits/af5577a24c155fa04aa607ff2fec283634df2fb0.
> 
> Hope this helps.
> 
> -John
> 
> On Wed, Apr 2, 2014 at 11:35 AM, Ravi Alla  wrote:
>> Hi guys,
>> I keep getting an error every time I start up the galaxy server. I am
>> guessing this has to do with a job that galaxy is trying to resume and
>> cannot find.
>> 
>> galaxy.jobs.runners ERROR 2014-04-02 09:31:10,889 Unhandled exception
>> checking active jobs
>> Traceback (most recent call last):
>>  File
>> "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/__init__.py",
>> line 366, in monitor
>>self.check_watched_items()
>>  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py",
>> line 363, in check_watched_items
>>( failures, statuses ) = self.check_all_jobs()
>>  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py",
>> line 452, in check_all_jobs
>>c = pbs.pbs_connect( pbs_server_name )
>> TypeError: in method 'pbs_connect', argument 1 of type 'char *'
>> 
>> Because of this error I cannot get any other jobs to run either. They just
>> sit queued on the cluster.
>> Any ideas?
>> Thanks
>> 




[galaxy-dev] Galaxy unhandled exception checking jobs error

2014-04-02 Thread Ravi Alla
Hi guys,
I keep getting an error every time I start up the Galaxy server. I am guessing 
this has to do with a job that Galaxy is trying to resume and cannot find.

galaxy.jobs.runners ERROR 2014-04-02 09:31:10,889 Unhandled exception checking active jobs
Traceback (most recent call last):
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 366, in monitor
self.check_watched_items()
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 363, in check_watched_items
( failures, statuses ) = self.check_all_jobs()
  File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 452, in check_all_jobs
c = pbs.pbs_connect( pbs_server_name )
TypeError: in method 'pbs_connect', argument 1 of type 'char *'
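For reference, the changeset John points to later in this thread works around this by making sure the server name handed to the SWIG binding is a byte string: values deserialized from the job database come back as unicode, while pbs_connect's C signature is char *. A minimal Python 2-era sketch of that guard (the server name below is hypothetical):

```python
# pbs.pbs_connect() is SWIG-generated and rejects unicode, so coerce the
# deserialized server name to a byte string before the call.
pbs_server_name = u"torque.example.org"  # hypothetical deserialized value
if not isinstance(pbs_server_name, str):
    pbs_server_name = pbs_server_name.encode("utf-8")
# c = pbs.pbs_connect(pbs_server_name)  # now receives char *-compatible input
```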

Because of this error I cannot get any other jobs to run either. They just sit 
queued on the cluster.
Any ideas?
Thanks

[galaxy-dev] Dealing with qiime output

2014-03-25 Thread Ravi Alla
Hello galaxy devs,
I installed qiime-galaxy on my local Galaxy (running jobs on a cluster). The 
split_libraries.py tool outputs a directory of files. How do I go about 
displaying these files in the history panel after the tool finishes running? I 
have been trying to look at the composite datatypes guide, but I am really 
having a hard time wrapping my head around it. Should I be creating a new 
composite datatype for every tool that outputs a directory (each with different 
files to display)? Is there any way to have Galaxy automatically unzip output 
directories into the history?
Thanks
Ravi
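For what it's worth, the composite-datatype pattern boils down to declaring up front which files make up one history item. The sketch below runs outside Galaxy by stubbing the Data base class; in a real deployment you would subclass galaxy.datatypes.data.Data instead and register the class in datatypes_conf.xml. The file names are guesses at split_libraries.py's usual outputs, not something defined by qiime-galaxy:

```python
class Data(object):
    """Minimal stand-in for galaxy.datatypes.data.Data (illustration only)."""
    def __init__(self, **kwd):
        self.composite_files = {}

    def add_composite_file(self, name, optional=False):
        # Each declared file becomes one member of a single composite
        # history item, rather than a separate dataset.
        self.composite_files[name] = {"optional": optional}


class SplitLibrariesOutput(Data):
    composite_type = "basic"  # fixed, pre-declared set of member files

    def __init__(self, **kwd):
        Data.__init__(self, **kwd)
        # Hypothetical split_libraries.py outputs:
        self.add_composite_file("seqs.fna")
        self.add_composite_file("split_library_log.txt", optional=True)
```

The tool then writes its directory contents into the composite dataset's extra-files path, and Galaxy shows one history item with the member files browsable inside it.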


[galaxy-dev] Common workflow test tools

2014-03-21 Thread Ravi Alla
Hello galaxy community,
I have a question regarding testing tools. I just set up a Galaxy server to 
submit jobs to a cluster. Everything seems to be in order so far. Are there 
established workflows with test data that let you test a typical analysis 
(e.g. RNA-Seq, from raw reads to counts; ChIP-Seq, from raw reads to enriched 
regions; SNP analysis, from raw reads to a VCF file)? I want to be able to run 
my tools on small test sets to make sure everything is fine and dandy before I 
open it up to some beta testers.

As an aside, does Galaxy have any dataset size limitations? Does it work equally 
well with an 8GB fastq/bam file compared to a 50-100GB fastq/bam?

Thanks
Ravi


Re: [galaxy-dev] Persistent jobs in cluster queue even after canceling job in galaxy

2014-03-14 Thread Ravi Alla
What are the best practices for updating galaxy? Also is there a quick command 
I can run to see what version of galaxy I am running?
On Mar 14, 2014, at 12:23 PM, Brian Claywell  wrote:

> On Fri, Mar 14, 2014 at 9:01 AM, John Chilton  wrote:
>> I believe this problem was fixed by Nate after the latest dist release
>> and pushed to the stable branch of galaxy-central.
>> 
>> https://bitbucket.org/galaxy/galaxy-central/commits/1298d3f6aca59825d0eb3d32afd5686c4b1b9294
>> 
>> If you are eager for this bug fix, you can track the latest stable
>> branch of galaxy-central instead of the galaxy-dist tag mentioned in
>> the dev news. Right now it has some other good bug fixes not in the
>> latest release.
> 
> Ah, got it, thanks! Is it unfeasible to push bug fixes like those back
> to galaxy-dist/stable so those of us that would prefer stable to
> bleeding-edge don't have to cherry-pick commits?
> 
> 
> -- 
> Brian Claywell, Systems Analyst/Programmer
> Fred Hutchinson Cancer Research Center
> bclay...@fhcrc.org




Re: [galaxy-dev] How to install bowtie2 tool in galaxy

2014-03-14 Thread Ravi Alla
Hi Nate 
Thanks for the clarifications. Is there anything specific I should watch out 
for when updating and deleting tools? The reason I ask is that Jennifer 
mentioned in previous emails the need to update metadata for tools, which I am 
not sure I follow.
Thanks
Ravi
On Mar 14, 2014, at 9:02 AM, Nate Coraor  wrote:

> On Wed, Mar 5, 2014 at 8:18 PM, Ravi Alla  wrote:
>> Hi Jennifer,
>> Thank you for this information. I was able to troubleshoot the
>> bowtie2_indices. Like Bjoern said the bowtie2 tool needed to be reinstalled
>> because the tool_table does not load the correct indices.
>> 
>> I am still trying to wrap my head around the different toolsheds.
>> 
>> The main tool shed resides in the galaxy-dist folder and uses tool_conf.xml
>> to load tools into the side bar and tool_data_table_conf.xml to load indices
>> (and other data) for the default tools that come with galaxy.
> 
> Hi Ravi,
> 
> The tools in the galaxy-dist directory and controlled via
> tool_conf.xml are not a part of the tool shed. The inclusion of tools
> directly in Galaxy predates the existence of the tool shed. Other than
> that, what you have above is correct.
> 
>> The shed tools reside in the ../shed_tools/ directory and use
>> shed_tools_conf.xml to load tools into the side bar and the
>> shed_tools_data_table_conf.xml to load indices.
>> 
>> The shed_tool xml is set up in the universe_wsgi.ini file.
> 
> Right.
> 
>> The .loc files for default main tools reside in galaxy-dist/tool-data and
>> the .loc files for the shed_tools reside in
>> ../shed_tools/...
> 
> By default they'll all be under galaxy-dist/tool-data/.  The versions
> in ../shed_tools are the sample location files provided by tool
> authors.
> 
>> And when I uninstall tools and reinstall I would have to update the metadata
>> for that tool.
> 
> I am not sure what you mean by this.
> 
>> If ../shed_tools dir is not present then shed-tools get installed under
>> galaxy-dist/tool-data/toolshed.g2.bx.psu.edu/
> 
> ../shed_tools should be created if it does not exist. Tools won't be
> installed in galaxy-dist/tool-data/
> 
>> And it is always better to install tools as wrappers + dependencies when
>> possible.
> 
> I agree.
> 
> --nate
> 
>> 
>> Am I on the right track with this tool organization?
>> Thanks
>> Ravi
>> 
>> 
>> 
>> On Mar 5, 2014, at 11:06 AM, Jennifer Jackson  wrote:
>> 
>> Hi Ravi,
>> 
>> The directory structure for the installation of ToolShed tools changed,
>> which is why you have three directories. You perhaps had bowtie2 installed
>> once before, then reinstalled (without completely removing the older version
>> and associated data)? Or updated without resetting the metadata? In either
>> case, the ../shed_tools directory is the new one. Having this as the path in
>> your .xml configuration files (as Bjoern suggested earlier) and moving all
>> contents & data to be under the same location will be the simplest global
>> solution ongoing. Links near the end of my reply can help explain how-to.
>> 
>> For the specific reason why bowtie2 indices are not working: I noticed that
>> the reference genome ".fa" file is not linked from the directory containing
>> the indexes. This is required. Adding it in, the same way that you did for
>> the bowtie2 indexes, into this dir:
>> "/global/referenceData/databases/bowtie2/hg19" will probably solve that part
>> of the problem. I didn't see this posted - but I apologize if I am
>> duplicating advice already given.
>> 
>> I also tend to advise keeping all data under the same master "data"
>> directory - all indexes and sequence data - as symbolic links to additional
>> file system paths that are unknown to the 'galaxy user' cause a different
>> set of problems. However, that said, this doesn't seem to be an issue in
>> your specific case: if the bowtie2 indexes are functioning correctly - then
>> the environment is set up so that the other dir hierarchy where the .fa files
>> are kept must be included in the 'galaxy' user's ENV. Symbolic links that go
>> outside of the local dir structure are known to cause problems unless the
>> ENV config is carefully set up, and to my knowledge are best avoided
>> entirely for certain uses such as "upload by file path" into libraries.
>> 
>> For reference:
>> 
>> NGS data set-up is described in this wiki - including expected content for
>> each type of index:
>> https://wiki.galaxyproject.org/Admin/NGS%
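As background to the .loc discussion above: location files of that era are plain tab-separated text. A hypothetical bowtie2_indices.loc entry using the index path mentioned in this thread might look like the line below (columns: unique build id, dbkey, display name, path prefix of the index files); check the exact column order against the .loc.sample shipped with the wrapper you installed:

```
hg19	hg19	Human (hg19)	/global/referenceData/databases/bowtie2/hg19/hg19
```

The last column is the basename prefix shared by the .bt2 files, not a directory, which is a common source of "index not found" errors.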

Re: [galaxy-dev] Persistent jobs in cluster queue even after canceling job in galaxy

2014-03-11 Thread Ravi Alla
Hi Peter,
No, I am not using that option. It is currently set to false in my 
universe_wsgi.ini file. It says it is a new feature and not recommended, so I 
didn't mess with it.
Thanks
Ravi
On Mar 11, 2014, at 11:27 AM, Peter Cock  wrote:

> Hi Ravi,
> 
> Could you reply to the list?
> 
> And actually I meant do you have "use_tasked_jobs = True" in
> your universe_wsgi.ini file which is linked to the special XML
> tag  in some Galaxy Tools for splitting jobs.
> Sorry, that was unclear of me.
> 
> Peter
> 
> 
> On Tue, Mar 11, 2014 at 6:24 PM, Ravi Alla  wrote:
>> Peter,
>> This is a Centos linux cluster using PBS Torque job manager. By 
>> parallelization are you referring to multiple job handlers? No I am not. I 
>> barely set this server up and it is in its most basic form so far. I will 
>> add options for multiple web servers and runners once I get things working 
>> soundly.
>> Thanks
>> Ravi
>> On Mar 11, 2014, at 11:07 AM, Peter Cock  wrote:
>> 
>>> On Tue, Mar 11, 2014 at 5:57 PM, Ravi Alla  wrote:
>>>> Hi All,
>>>> I've been able to submit jobs to the cluster through galaxy, it works 
>>>> great.
>>>> But when the job is in queue to run (it is gray in the galaxy history pane)
>>>> and I cancel the job, it still remains in queue on the cluster. Why does
>>>> this happen? How can I delete the jobs in queue as well? I tried qdel
>>>>  as galaxy user but it says I am not authenticated to delete the 
>>>> job.
>>>> Any help would be greatly appreciated.
>>>> Thanks
>>>> Ravi.
>>> 
>>> What kind of cluster is it? e.g. SGE?
>>> 
>>> Are you using task splitting (parallelization)?
>>> 
>>> Peter
>> 




[galaxy-dev] Persistent jobs in cluster queue even after canceling job in galaxy

2014-03-11 Thread Ravi Alla
Hi All,
I've been able to submit jobs to the cluster through Galaxy; it works great. 
But when a job is queued to run (it is gray in the Galaxy history pane) and 
I cancel it, it still remains in the queue on the cluster. Why does this 
happen? How can I delete the queued jobs as well? I tried qdel  as the 
galaxy user but it says I am not authenticated to delete the job.
Any help would be greatly appreciated.
Thanks
Ravi.


Re: [galaxy-dev] Galaxy database size and location

2014-03-06 Thread Ravi Alla
I figured it out. There is an option in the universe_wsgi.ini file called 
file_path, which points to database/files by default and can be changed to a 
different location.
Thanks
On Mar 6, 2014, at 1:04 AM, Hans-Rudolf Hotz  wrote:

> Hi Ravi
> 
> I don't quite understand your question. It looks like you are mixing up two 
> different things. A few comments, which might clarify and help you:
> 
> - the postgresql db does not store the data. It tracks the users,
>   their jobs and their histories. Hence, it stays pretty small.
> 
> - the actual data is stored in ~/galaxy_dist/database/files/
>   And this directory (or rather its numbered subdirectories) can grow
>   pretty quickly - depending on the kind of jobs you run.
> 
> - there are clean-up scripts which you can use to remove 'deleted'
>   history items (ie the data), see: 
> https://wiki.galaxyproject.org/Admin/Config/Performance/Purge%20Histories%20and%20Datasets
> 
> 
> Hope this helps, Hans-Rudolf
> 
> 
> On 03/06/2014 02:39 AM, Ravi Alla wrote:
>> Hi fellow galaxy devs,
>> 
>> I am trying to understand how to implement the galaxy database and get an 
>> idea of how big it could get. Currently we are running galaxy on a 
>> webserver, and want to have the postgresql db on a locally mounted partition 
>> and not on an NFS partition. This limits us to around 100GB of storage for 
>> the db. We will create data libraries for users to load their data without 
>> copying to galaxy, so input files won't be duplicated. Is there anything we 
>> can do about the output files? Do these files need to end up in the database 
>> or can we put them on the NFS partition somewhere with the db holding 
>> information about their location?
>> I noticed that on a routine small analysis I could easily have 20GB or more 
>> of output files and history and all this is in the database.
>> If output files and history files are written to the database, are they 
>> cleaned up daily to avoid storage issues?
>> 
>> Please advise.
>> Thanks
>> Ravi Alla
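To put the two settings from this thread side by side, a hedged universe_wsgi.ini excerpt (paths and credentials are placeholders, not from Ravi's setup): the PostgreSQL catalog stays small on local disk, while file_path sends the actual dataset files to the NFS partition:

```ini
# universe_wsgi.ini (illustrative values)
database_connection = postgresql://galaxy:secret@localhost:5432/galaxy
# History/output datasets are written here, not into PostgreSQL:
file_path = /mnt/nfs/galaxy/datasets
# Per-job temporary files:
new_file_path = /mnt/nfs/galaxy/tmp
```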


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Galaxy database size and location

2014-03-06 Thread Ravi Alla
I figured it out. There is an option in the universe.wsgi.ini file called 
file_path which points to database/file now and can be changed to a diff 
location.
Thanks
On Mar 6, 2014, at 1:04 AM, Hans-Rudolf Hotz  wrote:

> Hi Ravi
> 
> I don't quite understand question. It looks like you are mixing up two 
> different things? A few comments, which might clarify and help you:
> 
> - the postgresql db does not store the data. It tracks the users,
>  their jobs and their histories. Hence, it stays pretty small.
> 
> - the actual data is stored in ~/galaxy_dist/database/files/
>  And this directory (or rather its numbered subdirectories) can grow
>  pretty quickly - depending on the kind of jobs you run.
> 
> - there are clean-up scripts which you can use to remove 'deleted'
>  history items (ie the data), see: 
> https://wiki.galaxyproject.org/Admin/Config/Performance/Purge%20Histories%20and%20Datasets
> 
> 
> Hope this helps, Hans-Rudolf
> 
> 
> On 03/06/2014 02:39 AM, Ravi Alla wrote:
>> Hi fellow galaxy devs,
>> 
>> I am trying to understand how to implement the galaxy database and get an 
>> idea of how big it could get. Currently we are running galaxy on a 
>> webserver, and want to have the postgresql db on locally mounted partition 
>> and not on an NFS partition. This limits us to around 100GB of storage for 
>> the db. We will create data libraries for users to load their data without 
>> copying to galaxy, so input files won't be duplicated. Is there anything we 
>> can do about the output files? Do these files need to end up in the database 
>> or can we put them on the NFS partition somewhere with the db holding 
>> information about their location?
>> I noticed that on a routine small analysis I could easily have 20GB or more 
>> of output files and history and all this is in the database.
>> If output files and history files are written to the database, are they 
>> cleaned up daily to avoid storage issues?
>> 
>> Please advise.
>> Thanks
>> Ravi Alla
>> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


[galaxy-dev] Galaxy database size and location

2014-03-05 Thread Ravi Alla
Hi fellow galaxy devs,

I am trying to understand how to implement the galaxy database and get an idea 
of how big it could get. Currently we are running galaxy on a webserver, and 
want to have the postgresql db on a locally mounted partition and not on an NFS 
partition. This limits us to around 100GB of storage for the db. We will create 
data libraries for users to load their data without copying to galaxy, so input 
files won't be duplicated. Is there anything we can do about the output files? 
Do these files need to end up in the database or can we put them on the NFS 
partition somewhere with the db holding information about their location? 
I noticed that on a routine small analysis I could easily have 20GB or more of 
output files and history and all this is in the database.
If output files and history files are written to the database, are they cleaned 
up daily to avoid storage issues?

Please advise.
Thanks
Ravi Alla


Re: [galaxy-dev] How to install bowtie2 tool in galaxy

2014-03-05 Thread Ravi Alla
Hi Jennifer,
Thank you for this information. I was able to troubleshoot the bowtie2_indices. 
Like Bjoern said, the bowtie2 tool needed to be reinstalled because the tool 
data table was not loading the correct indices.

I am still trying to wrap my head around the different toolsheds.

The default tools reside in the galaxy-dist folder and use tool_conf.xml to 
load tools into the side bar and tool_data_table_conf.xml to load indices (and 
other data) for the tools that come with galaxy.

The shed tools reside in the ../shed_tools/ directory and use 
shed_tool_conf.xml to load tools into the side bar and 
shed_tool_data_table_conf.xml to load indices.

The shed_tool_conf.xml file is hooked up in the universe_wsgi.ini file.
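
Concretely, the hook-up is the tool_config_file line in universe_wsgi.ini, 
which lists both config files (the names here are the stock defaults):

```ini
# universe_wsgi.ini -- tool panel config files, loaded in order
tool_config_file = tool_conf.xml,shed_tool_conf.xml
```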

The .loc files for default main tools reside in galaxy-dist/tool-data and the 
.loc files for the shed_tools reside in 
../shed_tools/...

And when I uninstall and reinstall tools I have to reset the metadata for that 
tool.

If the ../shed_tools dir is not present, then shed tools get installed under 
galaxy-dist/tool-data/toolshed.g2.bx.psu.edu/


And it is always better to install tools as wrappers + dependencies when 
possible.

Am I on the right track with this tool organization?
Thanks
Ravi



On Mar 5, 2014, at 11:06 AM, Jennifer Jackson  wrote:

> Hi Ravi,
> 
> The directory structure for the installation of ToolShed tools changed, which 
> is why you have three directories. You perhaps had bowtie2 installed once 
> before, then reinstalled (without completely removing the older version and 
> associated data)? Or updated without resetting the metadata? In either case, 
> the ../shed_tools directory is the new one. Having this as the path in your 
> .xml configuration files (as Bjoern suggested earlier) and moving all 
> contents & data to be under the same location will be the simplest global 
> solution ongoing. Links near the end of my reply can help explain how-to.
> 
> For the specific reason why bowtie2 indices are not working: I noticed that 
> the reference genome ".fa" file is not linked from the directory containing 
> the indexes. This is required. Adding it in, the same way that you did for 
> the bowtie2 indexes, into this dir: 
> "/global/referenceData/databases/bowtie2/hg19" will probably solve that part 
> of the problem. I didn't see this posted - but I apologize if I am 
> duplicating advice already given. 
> 
> I also tend to advise keeping all data under the same master "data" directory 
> - all indexes and sequence data - since symbolic links to additional file 
> system paths that are unknown to the 'galaxy' user cause a different set of 
> problems. However, that said, this doesn't seem to be an issue in your 
> specific case: if the bowtie2 indexes are functioning correctly - then the 
> environment is set up so that the other dir hierarchy where the .fa files are 
> kept must be included in the 'galaxy' user's ENV. Symbolic links that go outside 
> of the local dir structure are known to cause problems unless the ENV config 
> is carefully set up, and to my knowledge are best avoided entirely for 
> certain uses such as "upload by file path" into libraries.
> 
> For reference:
> 
> NGS data set-up is described in this wiki - including expected content for 
> each type of index:
> https://wiki.galaxyproject.org/Admin/NGS%20Local%20Setup
> 
> For examples, you can rsync a genome or two and examine the contents, or 
> rsync our /location dir and have a look at the .loc files. 
> https://wiki.galaxyproject.org/Admin/DataIntegration
> 
> Tool Shed help (very detailed):
> https://wiki.galaxyproject.org/Tool%20Shed
> In particular, if you had previously installed repositories (this is not 
> clear, just suspected from the duplications), updating the Metadata with 
> certain distribution updates can be very important. This has been necessary 
> for the last few releases to pick up changes. The News Brief noted this, 
> and included a link to this wiki page. Also see the "Related Pages" lower 
> down on the wiki.
> https://wiki.galaxyproject.org/ResettingMetadataForInstalledRepositories
> This may also be useful:
> https://wiki.galaxyproject.org/RepairingInstalledRepositories
> 
> Hopefully you have sorted most of this out by now, or this helps!
> 
> Jen
> Galaxy team
> 
> 
> On 3/4/14 12:21 PM, Ravi Alla wrote:
>> Bjoern,
>> This is getting frustrating.
>> There are three places where a bowtie2_indices.loc file is expected, and I 
>> really don't know which one I should modify.
>> 
>> galaxy-dist/tool-data/bowtie2_indices.loc
>> galaxy-dist/tool-data/toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/96d2e31a3938/bowtie2_indices.loc
>> ../shed_tools/toolshed.g2

Re: [galaxy-dev] How to install bowtie2 tool in galaxy

2014-03-04 Thread Ravi Alla
Bjoern,
This is getting frustrating.
There are three places where a bowtie2_indices.loc file is expected, and I 
really don't know which one I should modify.

galaxy-dist/tool-data/bowtie2_indices.loc
galaxy-dist/tool-data/toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/96d2e31a3938/bowtie2_indices.loc
../shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/96d2e31a3938/bowtie2/tool-data/bowtie2_indices.loc

What is the difference between these 3 files?
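
Whichever copy is the live one, the entry format is the same in all three: four 
tab-separated columns, with the last column being the path to the indexes minus 
the .bt2 suffixes. Using the paths from this thread, an hg19 line would look 
roughly like this (column layout assumed from the devteam wrapper's sample file):

```text
#<unique_build_id>	<dbkey>	<display_name>	<file_base_path>
hg19	hg19	Human (hg19)	/global/referenceData/databases/bowtie2/hg19/hg19
```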

I changed the 2nd file to include the path to the indices; this shows up as a 
valid preloaded index for tophat2, but not for bowtie2. I also changed the 
normal bowtie_indices.loc file at the second location, and that works: I can 
see the bowtie1 indices for the bowtie tool. Currently only the bowtie2 indices 
are acting up.

I wish this was easier.
Thanks
Ravi.
On Mar 4, 2014, at 11:28 AM, Björn Grüning  wrote:

> Hi,
> 
> can you check if the file tool_data_table_conf.xml contains an entry for 
> bowtie2_indexes, or the file shed_tool_data_table_conf.xml? One of them 
> should have an entry with bowtie2_indexes.
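> 
> In a default install that entry looks something like this (the loc file path 
> and column order are taken from a stock tool_data_table_conf.xml; adjust to 
> your install):
> 
> ```xml
> <table name="bowtie2_indexes" comment_char="#">
>     <columns>value, dbkey, name, path</columns>
>     <file path="tool-data/bowtie2_indices.loc" />
> </table>
> ```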
> 
> That warning should not be there I think:
> 
> > galaxy.tools.parameters.dynamic_options WARNING 2014-03-04 10:14:33,335 
> > Data table named 'bowtie2_indexes' is required by tool but not configured
> 
> Hope you can figure it out,
> Bjoern
> 
> On 04.03.2014 19:19, Ravi Alla wrote:
>> Hi Bjoern,
>> Please find the bowtie2 and bowtie .loc files. In the paster.log file I see 
>> these entries for bowtie
>> 
>> galaxy.tools.data DEBUG 2014-03-04 10:14:11,048 Loaded tool data table 
>> 'bowtie_indexes'
>> galaxy.tools.data DEBUG 2014-03-04 10:14:11,051 Loading another instance of 
>> data table 'bowtie_indexes', attempting to merge content.
>> galaxy.tools.parameters.dynamic_options WARNING 2014-03-04 10:14:33,335 Data 
>> table named 'bowtie2_indexes' is required by tool but not configured
>> 
>> And ls -l /global/referenceData/databases/bowtie2/hg19 is
>> drwxr-xr-x 2 ralla cgrl  2048 Sep 10 11:14 .
>> drwxr-xr-x 3 ralla cgrl  2048 Sep 10 11:06 ..
>> -rw-rw-r-- 1 ralla cgrl 960018873 May  2  2012 hg19.1.bt2
>> -rw-rw-r-- 1 ralla cgrl 716863572 May  2  2012 hg19.2.bt2
>> -rw-rw-r-- 1 ralla cgrl  3833 May  2  2012 hg19.3.bt2
>> -rw-rw-r-- 1 ralla cgrl 716863565 May  2  2012 hg19.4.bt2
>> -rw-rw-r-- 1 ralla cgrl 960018873 May  2  2012 hg19.rev.1.bt2
>> -rw-rw-r-- 1 ralla cgrl 716863572 May  2  2012 hg19.rev.2.bt2
>> 
>> and ls -l /global/referenceData/databases/bowtie/hg19 is
>> drwxr-xr-x 2 ralla cgrl  2048 Mar  4 10:07 .
>> drwxr-xr-x 3 ralla cgrl  2048 Nov 30  2012 ..
>> -rw-rw-r-- 1 ralla cgrl 821725563 Nov 13  2009 hg19.1.ebwt
>> -rw-rw-r-- 1 ralla cgrl 357667968 Nov 13  2009 hg19.2.ebwt
>> -rw-rw-r-- 1 ralla cgrl  3284 Nov 13  2009 hg19.3.ebwt
>> -rw-rw-r-- 1 ralla cgrl 715335926 Nov 13  2009 hg19.4.ebwt
lrwxrwxrwx 1 ralla cgrl        45 Aug 21  2013 hg19.fa -> 
>> /global/referenceData/genomes/hs/hg19/hg19.fa
>> -rw-rw-r-- 1 ralla cgrl 821725563 Nov 13  2009 hg19.rev.1.ebwt
>> -rw-rw-r-- 1 ralla cgrl 357667968 Nov 13  2009 hg19.rev.2.ebwt
>> 
>> Thanks for looking into this.
>> Ravi.
>> On Mar 4, 2014, at 9:13 AM, Björn Grüning  wrote:
>> 
>>> Hi Ravi,
>>> 
>>> can you attach the loc file, do you see anything in the Galaxy log files 
>>> about bowtie2, try grepping for "bowtie2".
>>> 
>>> Cheers,
>>> Bjoern
>>> 
>>> On 04.03.2014 18:07, Ravi Alla wrote:
>>>> Hi Bjoern,
>>>> Thank you for your clarifications. I have tried installing bowtie2 using 
>>>> both variations, once with both wrapper and dependencies and once with 
>>>> only wrapper (since I have dependencies installed on the system). In 
>>>> either case even after editing the .loc file I cannot see the indices in 
>>>> galaxy. I tried the same edits to .loc file with bwa and the indices show 
>>>> right up, but not with bowtie2 and even bowtie for that matter. I really 
>>>> don't know what to do about this.
>>>> Thanks
>>>> Ravi
>>>> On Mar 4, 2014, at 3:06 AM, Björn Grüning  
>>>> wrote:
>>>> 
>>>>> Hi Ravi,
>>>>> 
>>>>>> Hi guys,
>>>>>> I am new to galaxy and am in the process of setting it up on a cluster 
>>>>>> with some help. I am trying to install bowtie2 to my local galaxy 
>>>>>> through the toolshed. I have a few questions.
>>>>>> 

Re: [galaxy-dev] How to install bowtie2 tool in galaxy

2014-03-04 Thread Ravi Alla
Hi Bjoern,
Please find the bowtie2 and bowtie .loc files. In the paster.log file I see 
these entries for bowtie

galaxy.tools.data DEBUG 2014-03-04 10:14:11,048 Loaded tool data table 
'bowtie_indexes'
galaxy.tools.data DEBUG 2014-03-04 10:14:11,051 Loading another instance of 
data table 'bowtie_indexes', attempting to merge content.
galaxy.tools.parameters.dynamic_options WARNING 2014-03-04 10:14:33,335 Data 
table named 'bowtie2_indexes' is required by tool but not configured

And ls -l /global/referenceData/databases/bowtie2/hg19 is 
drwxr-xr-x 2 ralla cgrl  2048 Sep 10 11:14 .
drwxr-xr-x 3 ralla cgrl  2048 Sep 10 11:06 ..
-rw-rw-r-- 1 ralla cgrl 960018873 May  2  2012 hg19.1.bt2
-rw-rw-r-- 1 ralla cgrl 716863572 May  2  2012 hg19.2.bt2
-rw-rw-r-- 1 ralla cgrl  3833 May  2  2012 hg19.3.bt2
-rw-rw-r-- 1 ralla cgrl 716863565 May  2  2012 hg19.4.bt2
-rw-rw-r-- 1 ralla cgrl 960018873 May  2  2012 hg19.rev.1.bt2
-rw-rw-r-- 1 ralla cgrl 716863572 May  2  2012 hg19.rev.2.bt2

and ls -l /global/referenceData/databases/bowtie/hg19 is
drwxr-xr-x 2 ralla cgrl  2048 Mar  4 10:07 .
drwxr-xr-x 3 ralla cgrl  2048 Nov 30  2012 ..
-rw-rw-r-- 1 ralla cgrl 821725563 Nov 13  2009 hg19.1.ebwt
-rw-rw-r-- 1 ralla cgrl 357667968 Nov 13  2009 hg19.2.ebwt
-rw-rw-r-- 1 ralla cgrl  3284 Nov 13  2009 hg19.3.ebwt
-rw-rw-r-- 1 ralla cgrl 715335926 Nov 13  2009 hg19.4.ebwt
lrwxrwxrwx 1 ralla cgrl        45 Aug 21  2013 hg19.fa -> 
/global/referenceData/genomes/hs/hg19/hg19.fa
-rw-rw-r-- 1 ralla cgrl 821725563 Nov 13  2009 hg19.rev.1.ebwt
-rw-rw-r-- 1 ralla cgrl 357667968 Nov 13  2009 hg19.rev.2.ebwt
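
Note the difference between the two listings: the working bowtie directory has 
an hg19.fa symlink sitting next to its indexes, while the bowtie2 directory 
does not. Linking the reference FASTA in is a one-liner; sketched below with 
temp-dir stand-ins for the /global paths so it can be tried anywhere:

```shell
# Stand-ins (assumptions, not the real locations):
#   idx_dir ~ /global/referenceData/databases/bowtie2/hg19
#   genome  ~ /global/referenceData/genomes/hs/hg19/hg19.fa
idx_dir=$(mktemp -d)
genome=$(mktemp)
# Link the reference FASTA into the index directory, named to match the index base.
ln -s "$genome" "$idx_dir/hg19.fa"
ls -l "$idx_dir"
```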

Thanks for looking into this.
Ravi.
On Mar 4, 2014, at 9:13 AM, Björn Grüning  wrote:

> Hi Ravi,
> 
> can you attach the loc file, do you see anything in the Galaxy log files 
> about bowtie2, try grepping for "bowtie2".
> 
> Cheers,
> Bjoern
> 
> On 04.03.2014 18:07, Ravi Alla wrote:
>> Hi Bjoern,
>> Thank you for your clarifications. I have tried installing bowtie2 using 
>> both variations, once with both wrapper and dependencies and once with only 
>> wrapper (since I have dependencies installed on the system). In either case 
>> even after editing the .loc file I cannot see the indices in galaxy. I tried 
>> the same edits to .loc file with bwa and the indices show right up, but not 
>> with bowtie2 and even bowtie for that matter. I really don't know what to do 
>> about this.
>> Thanks
>> Ravi
>> On Mar 4, 2014, at 3:06 AM, Björn Grüning  wrote:
>> 
>>> Hi Ravi,
>>> 
>>>> Hi guys,
>>>> I am new to galaxy and am in the process of setting it up on a cluster 
>>>> with some help. I am trying to install bowtie2 to my local galaxy through 
>>>> the toolshed. I have a few questions.
>>>> 
>>>> - Which bowtie2 should I install? When I search for bowtie2 a bunch of 
>>>> results come up?
>>> 
>>> Only two, no? Please make sure you are using the main toolshed. bowtie2 and 
>>> package_bowtie2_2_1_0 are connected together. One is the binary and the 
>>> other contains the wrapper. Using devteam repositories is always a good 
>>> choice.
>>> 
>>>> - What are repository dependencies? Why would I need these if bowtie2 is 
>>>> already installed on my system?
>>> 
>>> Repository dependencies contain all the dependencies of your wrappers: these 
>>> can be binaries, but also R, Perl, or Python libraries that are needed to 
>>> execute your wrappers. We recommend using the dependencies because then you 
>>> have full control over all the versions in use (wrapper, dependencies, tool), 
>>> which enables reproducibility. To put it in other words: with the toolshed 
>>> you can have different tool and wrapper versions with different dependencies 
>>> at the same time.
>>> 
>>>> - I have been installing bowtie2 by unchecking the repository dependencies 
>>>> and tool dependencies boxes. Is this correct?
>>> 
>>> It will work if you have a system-installed version, but then you need to 
>>> take care of binary updates on your own, and reproducibility of your results 
>>> is not guaranteed.
>>> 
>>>> - Finally when I install bowtie2 this way it creates 2 bowtie2_indices.loc 
>>>> files, one in galaxy/tool-data and the other in 
>>>> shed_tools/toolshed.g2.gx.psu.edu/dev/bowtie2, which one do I need to edit 
>>>> to point to my index files? No matter which one I change I can't seem to 
>>>> see the indices.
>>> 
>>> That is strange. Are you sure you don't have a 

Re: [galaxy-dev] How to install bowtie2 tool in galaxy

2014-03-04 Thread Ravi Alla
Hi Bjoern,
Thank you for your clarifications. I have tried installing bowtie2 using both 
variations, once with both wrapper and dependencies and once with only wrapper 
(since I have dependencies installed on the system). In either case even after 
editing the .loc file I cannot see the indices in galaxy. I tried the same 
edits to .loc file with bwa and the indices show right up, but not with bowtie2 
and even bowtie for that matter. I really don't know what to do about this.
Thanks
Ravi
On Mar 4, 2014, at 3:06 AM, Björn Grüning  wrote:

> Hi Ravi,
> 
>> Hi guys,
>> I am new to galaxy and am in the process of setting it up on a cluster with 
>> some help. I am trying to install bowtie2 to my local galaxy through the 
>> toolshed. I have a few questions.
>> 
>> - Which bowtie2 should I install? When I search for bowtie2 a bunch of 
>> results come up?
> 
> Only two, no? Please make sure you are using the main toolshed. bowtie2 and 
> package_bowtie2_2_1_0 are connected together. One is the binary and the other 
> contains the wrapper. Using devteam repositories is always a good choice.
> 
>> - What are repository dependencies? Why would I need these if bowtie2 is 
>> already installed on my system?
> 
> Repository dependencies contain all the dependencies of your wrappers: these 
> can be binaries, but also R, Perl, or Python libraries that are needed to 
> execute your wrappers. We recommend using the dependencies because then you 
> have full control over all the versions in use (wrapper, dependencies, tool), 
> which enables reproducibility. To put it in other words: with the toolshed 
> you can have different tool and wrapper versions with different dependencies 
> at the same time.
> 
>> - I have been installing bowtie2 by unchecking the repository dependencies 
>> and tool dependencies boxes. Is this correct?
> 
> It will work if you have a system-installed version, but then you need to 
> take care of binary updates on your own, and reproducibility of your results 
> is not guaranteed.
> 
>> - Finally when I install bowtie2 this way it creates 2 bowtie2_indices.loc 
>> files, one in galaxy/tool-data and the other in 
>> shed_tools/toolshed.g2.gx.psu.edu/dev/bowtie2, which one do I need to edit 
>> to point to my index files? No matter which one I change I can't seem to see 
>> the indices.
> 
> That is strange. Are you sure you don't have a spelling mistake in it? It 
> should be bowtie2_indices.loc, I think. I will CC Greg, he is one of the 
> Tool Shed developers and should know more about it.
> Please do not forget to restart your Galaxy instance once you have updated 
> the *.loc files.
> 
>> I hope someone on here can help me out.
> 
> Hope it helped a little bit,
> Bjoern
> 
>> Thanks a lot
>> Ravi.
>> 




[galaxy-dev] How to install bowtie2 tool in galaxy

2014-03-03 Thread Ravi Alla
Hi guys,
I am new to galaxy and am in the process of setting it up on a cluster with 
some help. I am trying to install bowtie2 to my local galaxy through the 
toolshed. I have a few questions.

- Which bowtie2 should I install? When I search for bowtie2 a bunch of results 
come up.
- What are repository dependencies? Why would I need these if bowtie2 is 
already installed on my system?
- I have been installing bowtie2 by unchecking the repository dependencies and 
tool dependencies boxes. Is this correct?
- Finally, when I install bowtie2 this way it creates 2 bowtie2_indices.loc 
files, one in galaxy/tool-data and the other in 
shed_tools/toolshed.g2.bx.psu.edu/dev/bowtie2; which one do I need to edit to 
point to my index files? No matter which one I change I can't seem to see the 
indices.

I hope someone on here can help me out.
Thanks a lot
Ravi.