[galaxy-dev] Next GalaxyAdmins Meetup: November 20

2013-11-14 Thread Dave Clements
Hello all,

The next GalaxyAdmins meetup will be Wednesday, November 20, at 10am US
Central time.  See the meetup page
http://wiki.galaxyproject.org/Community/GalaxyAdmins/Meetups/2013_11_20
for a link and directions for connecting to the meetup.



*GCC2013 GalaxyAdmins BoF Followup*

This is our first meetup since the GCC2013 GalaxyAdmins BoF
http://wiki.galaxyproject.org/Events/GCC2013/BoF/GalaxyAdmins
where we discussed what the group should focus on going forward, and what the
Galaxy Project can do to support the group.

As suggested at GCC2013, followup to that discussion will be the main topic
for this meetup.  We came out with several action items and one piece of
unfinished business (leadership).

Also, as suggested at the GCC2013 BoF, we would like to encourage
discussion in the week before the meetup.  Therefore,

   1. Please review the notes from the GCC2013 GalaxyAdmins BoF
      http://wiki.galaxyproject.org/Events/GCC2013/BoF/GalaxyAdmins
   2. Take a look at these two draft implementations of action items from
      that discussion:
      1. Galaxy Deployment Pages
         http://wiki.galaxyproject.org/Community/Deployments
      2. Galaxy Log Pages
         http://wiki.galaxyproject.org/Community/Logs

And if you see anything that you want to comment on, please reply to this
thread on the Galaxy-Dev list.

I'll update other actions as the call gets closer.


*Galaxy Project Update: Main moves to TACC*

Nate Coraor will give the project update, focusing on the recent move of
UseGalaxy.org to TACC.


We hope to see (well, hear) you there, and please don't hesitate to ask if
you have any questions.

Thanks,

Dave C.


-- 
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://wiki.galaxyproject.org/
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] BAM/BAI index file test problem on (Test) Tool Shed

2013-11-14 Thread Peter Cock
On Wed, Nov 13, 2013 at 12:03 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hello Peter,

 Yesterday we discovered what Björn has communicated and added it to the
 Trello card for this issue:

 https://trello.com/c/sN2iLCCn/99-bug-in-install-and-test-framework-1

 ...

 becomes the following in your renamed tool_dependencies.xml file:

 <tool_dependency>
     <package name="samtools" version="0.1.19">
         <repository name="package_samtools_0_1_19" owner="iuc" />
     </package>
 </tool_dependency>

 This is all discussed in the following section of the Tool Shed wiki:

 http://wiki.galaxyproject.org/ToolShedToolFeatures#Automatic_third-party_tool_dependency_installation_and_compilation_with_installed_repositories

I believe that is what I did in response to Bjoern's email, shortly
before seeing your email (but thank you for the detailed reply):

http://testtoolshed.g2.bx.psu.edu/view/peterjc/samtools_idxstats/93b8db68dde4

As an aside, I personally find the tool_dependencies.xml vs
repository_dependencies.xml split confusing when defining a
dependency on a third-party tool which is provided by another
Tool Shed repository. I don't understand why this isn't all done
with repository_dependencies.xml alone.


 Let me know if this doesn't correct this problem.


Last night's Test Tool Shed run confirms this does NOT fix the problem,
which does indeed appear to be in the Galaxy upload tool, which has
an implicit dependency on samtools for indexing BAM files, as John
suggested.

Regards,

Peter



Re: [galaxy-dev] Tool Shed packages for BLAST+ binaries

2013-11-14 Thread Peter Cock
On Tue, Nov 12, 2013 at 3:33 AM, Dave Bouvier d...@bx.psu.edu wrote:
 Peter,

 Thanks for bringing this to our attention; we're working on fixing a number
 of issues with the test framework and hope to have more information for you
 tomorrow.

  --Dave B.

Hi Dave,

Good news: the BLAST+ tests appear to have all passed on the Test Tool Shed:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/f2478dc77ccb

Tool test results
Automated test environment
Time tested: ~ 5 hours ago
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11318:7553213e0646
Galaxy database version: 117
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
Tests that passed successfully
Tool id: blastxml_to_tabular
Tool version: blastxml_to_tabular
Test: test_tool_00
(functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/blastxml_to_tabular/0.0.11)
...
Tool id: ncbi_tblastx_wrapper
Tool version: ncbi_tblastx_wrapper
Test: test_tool_00
(functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/ncbi_tblastx_wrapper/0.0.21)

Curiously, however, this no longer seems to be complaining about
the BLAST+ tools that lack any tests - a new bug?

Over on the main Tool Shed, the binary installation seems to be failing
(still using the bash script magic - is the test system still missing bash,
or is there a different problem here?). Here too, there doesn't seem to
be any mention of the tools missing tests.

At this point (given it is working on the Test Tool Shed), I think it should
be safe to update the BLAST+ packages to use the new architecture/os
specific action tags (a recent feature which is now supported in the
stable Galaxy releases):

http://toolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_2_26
http://toolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_2_27
http://toolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_2_28

Any objections?

Thanks,

Peter


Re: [galaxy-dev] Tool Shed packages for BLAST+ binaries

2013-11-14 Thread Nate Coraor
Bash is easily obtained on these systems and I think the extra functionality 
available in post-Bourne shells ought to be allowed.  I also question how many 
people are going to run tools on *BSD since most underlying analysis tools tend 
to only target Linux.

That said, shell code should be restricted to Bourne-compatible syntax whenever 
there’s no good reason to use non-Bourne features, e.g. if all you’re doing is 
`export FOO=foo`, you should not be forcing the use of bash.  In cases where 
bash really is required (say, you’re using arrays), the script should 
explicitly specify '#!/bin/bash' (or '#!/usr/bin/env bash'?) rather than 
'#!/bin/sh'.
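
[Illustration added in editing, not part of Nate's message: a minimal sketch of
the distinction. It assumes only a POSIX /bin/sh and a bash binary on the PATH;
the file contents and platform names are made up.]

```
#!/bin/sh
# Bourne/POSIX-compatible: runs the same under sh, dash, or bash.
FOO=foo
export FOO
echo "FOO is ${FOO:-unset}"
```

```
#!/usr/bin/env bash
# Requires bash: arrays are not part of POSIX sh, so declare the interpreter.
platforms=(linux_x86_64 linux_i686 osx)
for p in "${platforms[@]}"; do
    echo "building for $p"
done
```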

—nate

On Nov 11, 2013, at 11:04 AM, James Taylor ja...@jamestaylor.org wrote:

 This is not an objection, but do we need bash? Can we live with posix
 sh? We should ask this question about every requirement we introduce.
 
 (bash is not part of the default installation of FreeBSD or OpenBSD
 for example. bash is unfortunately licensed under GPLv3, so if you are
 trying to create an OS not polluted by viral licensing you don't get
 bash).
 
 On Mon, Oct 7, 2013 at 11:36 AM, John Chilton chil...@msi.umn.edu wrote:
 My own preference is that we specify at least /bin/sh and /bin/bash
 are available before utilizing the tool shed. Is there an objection to
 this from any corner? Is there realistically a system that Galaxy
 should support that will not have /bin/bash available?
 
 --
 James Taylor, Associate Professor, Biology/CS, Emory University




Re: [galaxy-dev] Tool Shed packages for BLAST+ binaries

2013-11-14 Thread Peter Cock
On Thu, Nov 14, 2013 at 5:13 PM, Nate Coraor n...@bx.psu.edu wrote:
 Bash is easily obtained on these systems and I think the extra functionality
 available in post-Bourne shells ought to be allowed.  I also question how
 many people are going to run tools on *BSD since most underlying
 analysis tools tend to only target Linux.

So could bash be declared an expected Galaxy dependency? i.e. part of the
core system libraries and tools which Tool Authors may assume, and
which Galaxy Administrators should install?

 That said, shell code should be restricted to Bourne-compatible syntax
 whenever there’s no good reason to use non-Bourne features, e.g. if
 all you’re doing is `export FOO=foo`, you should not be forcing the use
 of bash.  In cases where bash really is required (say, you’re using
 arrays), the script should explicitly specify '#!/bin/bash' (or 
 '#!/usr/bin/env
 bash'?) rather than '#!/bin/sh'.

I agree that any shell script (e.g. a tool wrapper) which is bash-specific
should say that in the hash-bang line, rather than '#!/bin/sh'.

What about command-line magic like -num_threads \${GALAXY_SLOTS:-8}
in a <command> tag using bash-specific environment variable default values?

What about bash-specific if statements in <action type="shell_command">
sections of a tool_dependencies.xml file (which is what the BLAST+
packages currently use on the main Tool Shed, pending an update
to use arch/os specific tags as tested on the Test Tool Shed)?

Peter



Re: [galaxy-dev] BAM/BAI index file test problem on (Test) Tool Shed

2013-11-14 Thread Greg Von Kuster
Hi Peter,


On Nov 14, 2013, at 11:27 AM, Peter Cock p.j.a.c...@googlemail.com wrote:

 On Wed, Nov 13, 2013 at 12:03 PM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hello Peter,
 
 Yesterday we discovered what Björn has communicated and added it to the
 Trello card for this issue:
 
 https://trello.com/c/sN2iLCCn/99-bug-in-install-and-test-framework-1
 
 ...
 
 becomes the following in your renamed tool_dependencies.xml file:
 
 <tool_dependency>
     <package name="samtools" version="0.1.19">
         <repository name="package_samtools_0_1_19" owner="iuc" />
     </package>
 </tool_dependency>
 
 This is all discussed in the following section of the Tool Shed wiki:
 
 http://wiki.galaxyproject.org/ToolShedToolFeatures#Automatic_third-party_tool_dependency_installation_and_compilation_with_installed_repositories
 
 I believe that is what I did in response to Bjoern's email, shortly
 before seeing your email (but thank you for the detailed reply):
 
 http://testtoolshed.g2.bx.psu.edu/view/peterjc/samtools_idxstats/93b8db68dde4
 
 As an aside, I personally find the tool_dependencies.xml vs
 repository_dependencies.xml split confusing when defining a
 dependency on a third-party tool which is provided by another
 Tool Shed repository. I don't understand why this isn't all done
 with repository_dependencies.xml alone.

If tool dependencies were defined in a repository_dependencies.xml file, the 
definition would be more complex than the current approach we are using. 

1)  A simple repository dependency defines a relationship between two 
repositories and does not consider any of the contents of either repository.  
This relationship is defined in a repository_dependencies.xml file.  A good 
example is the following repository_dependencies.xml file associated with the 
emboss5 repository at:  
http://devt...@testtoolshed.g2.bx.psu.edu/repos/devteam/emboss_5

The relationship is between the emboss5 repository and its required repository 
named emboss_datatypes.

<?xml version="1.0"?>
<repositories description="Emboss 5 requires the Galaxy applicable data formats used by Emboss tools.">
    <repository toolshed="http://testtoolshed.g2.bx.psu.edu" name="emboss_datatypes" owner="devteam" changeset_revision="9f36ad2af086" />
</repositories>


2)  A complex repository dependency defines a relationship between some of 
the contents of each of two repositories (the relationship is usually between a 
tool (e.g., XML config and script combination) in the dependent repository and 
a tool component (e.g., a third-party binary) in the required repository).  
This relationship is defined in a tool_dependencies.xml file.  A good example is 
the following tool_dependencies.xml file in the same emboss5 repository at:
http://devt...@testtoolshed.g2.bx.psu.edu/repos/devteam/emboss_5

<?xml version="1.0"?>
<tool_dependency>
    <package name="emboss" version="5.0.0">
        <repository changeset_revision="9fd501d0f295" name="package_emboss_5_0_0" owner="devteam" toolshed="http://testtoolshed.g2.bx.psu.edu" />
    </package>
</tool_dependency>

The above definition defines only a portion of the relationship to the contents 
of the emboss5 repository.  The remaining portion is the contents of the 
<requirements> tag set in the contained tool(s):

<requirements>
    <requirement type="package" version="5.0.0">emboss</requirement>
</requirements>

For each contained tool that has the above <requirement> tag entry, the tool 
will find the binary dependency installed with the required 
package_emboss_5_0_0 repository when the tool is executed.  Since this 
relationship is defined at the tool level and not the repository level, it 
is defined in the tool_dependencies.xml file.

All of this information is explained in the Tool Shed wiki at the following 
pages.

http://wiki.galaxyproject.org/DefiningRepositoryDependencies#Simple_repository_dependencies

http://wiki.galaxyproject.org/DefiningRepositoryDependencies#Complex_repository_dependencies:_tool_dependency_definitions_that_contain_repository_dependency_definitions


 
 
 Let me know if this doesn't correct this problem.
 
 
 Last night's Test Tool Shed run confirms this does NOT fix the problem,
 which does indeed appear to be in the Galaxy upload tool which has
 an implicit dependency on samtools for indexing BAM files as John
 suggested.

Yes, we're working extensively on the Tool Shed's install and test framework, 
and we'll figure out the best way to handle this issue.

Thanks!

 
 Regards,
 
 Peter
 
 


Re: [galaxy-dev] Tool Shed packages for BLAST+ binaries

2013-11-14 Thread Nate Coraor
On Nov 14, 2013, at 12:21 PM, Peter Cock p.j.a.c...@googlemail.com wrote:

 On Thu, Nov 14, 2013 at 5:13 PM, Nate Coraor n...@bx.psu.edu wrote:
 Bash is easily obtained on these systems and I think the extra functionality
 available in post-Bourne shells ought to be allowed.  I also question how
 many people are going to run tools on *BSD since most underlying
 analysis tools tend to only target Linux.
 
 So could bash be declared an expected Galaxy dependency? i.e. The
 core system libraries and tools which Tool Authors may assume, and
 which Galaxy Administrators should install?

It’s my opinion that it could.  I’ll start a discussion about this shortly so 
we can hammer out the rest.

 
 That said, shell code should be restricted to Bourne-compatible syntax
 whenever there’s no good reason to use non-Bourne features, e.g. if
 all you’re doing is `export FOO=foo`, you should not be forcing the use
 of bash.  In cases where bash really is required (say, you’re using
 arrays), the script should explicitly specify '#!/bin/bash' (or 
 '#!/usr/bin/env
 bash'?) rather than '#!/bin/sh'.
 
 I agree that any shell script (e.g. a tool wrapper) which is bash specific
 should say that in the hash-bang line, rather than '#!/bin/sh'.
 
 What about command-line magic like -num_threads \${GALAXY_SLOTS:-8}
 in a <command> tag using bash-specific environment variable default values?

${FOO:-default} is, surprisingly, Bourne-compatible.
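
[Aside added in editing, not part of Nate's reply: a quick check of this,
assuming /bin/sh on the machine is a POSIX shell such as dash.]

```
# ${VAR:-default} is POSIX parameter expansion, so no bash is needed:
sh -c 'echo "threads: ${GALAXY_SLOTS:-8}"'                  # prints "threads: 8" when unset
GALAXY_SLOTS=4 sh -c 'echo "threads: ${GALAXY_SLOTS:-8}"'   # prints "threads: 4"
```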

 What about bash-specific if statements in <action type="shell_command">
 sections of a tool_dependencies.xml file (which is what the BLAST+
 packages currently use on the main Tool Shed, pending an update
 to use arch/os specific tags as tested on the Test Tool Shed)?

This is a bit trickier, since there’s currently no way to specify installation 
actions to run on a system other than the Galaxy application server.  However, 
if we are saying that bash should be available to tools, I’d think there is no 
reason to say that it’s not expected to be available for tool installation.  
Unfortunately, I believe the current installation methods use the subprocess 
default shell, which is sh, and sh is not going to be bash on some systems (on 
Debian, it’s dash).
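
[Illustration added in editing, not part of the original message: how a bash-ism
trips over dash, assuming a Debian-like host where /bin/sh is dash; the [[ ]]
test is only an example.]

```
# [[ ... ]] is a bash extension; dash rejects it, bash accepts it:
dash -c '[[ -n "$HOME" ]] && echo ok'   # error: "[[: not found"
bash -c '[[ -n "$HOME" ]] && echo ok'   # prints "ok"
```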

—nate

 
 Peter
 




[galaxy-dev] Fwd: Managing Data Locality

2013-11-14 Thread John Chilton
Forgot to cc galaxy-dev on this; it is too riveting not to post :).

-John

-- Forwarded message --
From: John Chilton chil0...@umn.edu
Date: Thu, Nov 14, 2013 at 10:00 AM
Subject: Re: [galaxy-dev] Managing Data Locality
To: Paniagua, Eric epani...@cshl.edu


Hey Eric,

  Sorry for the delayed response; I have pushed some updates to
galaxy-central and the LWR to close some loops and fill out the
documentation based on your comments.

  I worry my last e-mail didn't make it clear that what you want to do
is very ... ambitious. I didn't mean to make it sound like this was a
solved problem, just that it was a problem that people were working on
various parts of. The additional wrinkle that you would like to run these
jobs as the actual user is another significant hurdle.  All of that
said, you are certainly not alone or unreasonable in wanting this
functionality - many large computing centers have very similar use
cases and have made various degrees of progress, including my former
employer, the Minnesota Supercomputing Institute, but I doubt anyone is
currently using the LWR in this capacity at such centers (it is still
mostly used for submitting jobs to Windows servers).

On Fri, Nov 8, 2013 at 4:04 PM, Paniagua, Eric epani...@cshl.edu wrote:
 Hi John,

 I have now read the top-level documentation for LWR, and gone through the 
 sample configurations.  I would appreciate if you would answer a few 
 technical questions for me.

 1) How exactly is the staging_directory in server.ini.sample used?  Is 
 that intended to be the (final) location at which to put files on the remote 
 server?  How is the relative path structure under 
 $GALAXY_ROOT/databases/files handled?

Depending on the configuration, either the LWR client or the LWR will
copy/transfer files out of $GALAXY_ROOT/database/files into
${staging_directory}/${job_identifier}. In your case
$GALAXY_ROOT/database/files will not be mounted on the large compute
cluster, but staging_directory should be.  Here, job_identifier can be
either the Galaxy job id or a UUID if you want to allow multiple
Galaxy instances to submit to the same LWR (see the assign_uuid option in
server.ini.sample).
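
[Sketch added in editing, not part of John's reply: how the staged paths work
out under made-up values; /scratch/lwr/staging and job id 1234 are purely
illustrative.]

```
# With staging_directory = /scratch/lwr/staging and a Galaxy job id of 1234,
# the LWR client/server stage files roughly as:
#   /scratch/lwr/staging/1234/inputs/    <- copies of the job's inputs from $GALAXY_ROOT/database/files
#   /scratch/lwr/staging/1234/outputs/   <- outputs, copied back into $GALAXY_ROOT/database/files afterwards
# With assign_uuid enabled, a UUID is used in place of the numeric job id.
```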


 2) What exactly does persistence_directory in server.ini.sample mean?  
 Where should it be located, how will it be used?

The LWR doesn't talk to a real database; it just uses a directory to
persist various internal mappings that should survive an LWR
restart. You shouldn't need to modify this unless $LWR_ROOT/persisted_data
is not writable by the LWR user (in your case
the LWR user should likely be the Galaxy user). I have filled out the
documentation in server.ini.sample to reflect this.


 3) What exactly does file_cache_dir in server.ini.sample mean?

It is an experimental feature - it may help you down the road to cache
large files on the file system available to the whole cluster, so they
only need to be transferred out of $GALAXY_ROOT/database/files once.
This option is not used unless it is specified, however, so I would try
to get it working in a simpler (though still very complicated :) )
configuration first. I have filled out the documentation in
server.ini.sample to reflect this.


 4) Does LWR preserve some relative path (e.g. to GALAXY_ROOT) under the above 
 directories?

No.


 5) Are files renamed when cached?  If so, are they eventually restored to 
 their original names?

They are put in new directory structures (e.g.
${staging_directory}/${job_id}/{inputs,outputs}), but I believe they
should have the same names. They do eventually get plopped back into
$GALAXY_ROOT/database/files. I have never used the LWR with an object
store - so none of this might work. Hopefully, if there is a fix
needed it will be easy.


 6) Is it possible to customize the DRMAA and/or qsub requests made by LWR, 
 for example to include additional settings such as Project or a memory limit? 
  Is it possible to customize this on a case by case basis, rather than 
 globally?

Yes, there are lots of possibilities here (probably too many) and
obviously none particularly well documented. There is an LWR way of
doing this, but I think the best thing to do is going to be piggybacking
on job_conf.xml (the Galaxy way). You will want to review the
documentation for how to set up job_conf.xml and point to it, but once
you set up an LWR destination you can specify a native specification
to pass along to the LWR by adding the following tag:

  <param id="submit_native_specification">-P bignodes -R y -pe threads 8</param>

This will only work if you are targeting a queued_drmaa or
queued_external_drmaa job manager on the LWR side...



 7) Are there any options for the queued_drmaa manager in 
 job_managers.ini.sample which are not listed in that file?

Yes, in particular native_specification can be specified here just like
in Galaxy. I have updated the sample.

However, given your setup (run as the real user) you will want the manager
type 'queued_external_drmaa' which runs 

Re: [galaxy-dev] Tool Shed packages for BLAST+ binaries

2013-11-14 Thread Greg Von Kuster
If, by "current installation methods", you are not referring to Tool Shed 
repository installation methods for installing tool dependencies, then please 
read no further.

For installing tool dependencies along with repositories from the Tool Shed, 
fabric is currently being used.  However, it's been on my list for a while to 
eliminate fabric if possible so that I have better management of the threads and 
process ids during installation.  If fabric / subprocess is a problem here, I 
can attempt to raise the priority of looking at this.

Greg Von Kuster

On Nov 14, 2013, at 12:38 PM, Nate Coraor n...@bx.psu.edu wrote:

 
 This is a bit trickier, since there’s currently no way to specify installation 
 actions to run on a system other than the Galaxy application server.  
 However, if we are saying that bash should be available to tools, I’d think 
 there is no reason to say that it’s not expected to be available to tool 
 installation.  Unfortunately, I believe the current installation methods use 
 subprocesses’ default, which is sh, which is not going to be bash on some 
 systems (on Debian, it’s dash).
 
 —nate
 




[galaxy-dev] wiki contributions: Admin/Config/ProFTPd_with_AD

2013-11-14 Thread Eric Rasche

I'd like to contribute what I've learnt today to this particular page.
As it is locked, I am unsure how to contribute my information, so I'm
posting here in the hopes that someone with rights will update it.
Formatted in (pandoc compatible) Markdown for your ease.




# Configuring ProFTPD with OpenLDAP

I've found a set of working options for using ProFTPD with OpenLDAP
servers (instead of AD).

This configuration file can be modified and placed in
`/etc/proftpd/conf.d/galaxy.conf`.

Using the /conf.d/ directory, you can allow ProFTPD to serve both
local users (with PAM authentication) via the main configuration file,
AND galaxy users on another port.

```
<VirtualHost xxx.yyy.zzz>
    RequireValidShell   off
    User                galaxy
    Group               galaxy
    Umask               137 027
    AllowOverwrite      on

    # Ensure auth is LDAP
    AuthPAM             off
    AuthOrder           mod_ldap.c

    # Serve this VirtualHost on port 4000
    Port                4000

    # LDAP Bind information
    LDAPServer          ldaps://xxx.yyy.zzz/??sub
    LDAPUsers           ou=People,dc=yyy,dc=zzz  (uid=%u)
    LDAPAuthBinds       on

    # Force those numbers even if LDAP finds a valid UID/GID
    LDAPDefaultUID      1003
    LDAPDefaultGID      1003
    LDAPForceDefaultUID on
    LDAPForceDefaultGID on

    # Please generate home dir with user/group rwx permissions.
    # Could probably be stricter
    CreateHome          on 770
    LDAPGenerateHomedir on 770

    # Force this homedir even if LDAP said something different
    LDAPForceGeneratedHomedir   on
    LDAPGenerateHomedirPrefix   /home/galaxy/galaxy/database/ftp/%u...@cpt.tamu.edu

    # The username is already incorporated in the %u, use this or it
    # will get appended again
    LDAPGenerateHomedirPrefixNoUsername on

    TransferLog         /var/log/proftpd/xfer-galaxy.log

    # Cause every FTP user to be jailed (chrooted) into their home directory
    DefaultRoot         /home/galaxy/galaxy/database/ftp/%u...@cpt.tamu.edu
    # Allow users to resume interrupted uploads
    AllowStoreRestart   on
    # I set these as my passive ports because I run a very strict
    # firewall. Change as needed
    PassivePorts        49152 5
</VirtualHost>
```

Notably, this configuration allows a galaxy virtualhost to coexist with
the normal FTP capabilities provided by ProFTPd, so users can still
access their home directories AND galaxy users can upload to galaxy.
Authentication can of course be changed to suit one's needs.

# TLS Configuration

If you're running the galaxy FTP portion under a VirtualHost, as
described above, you'll notice that TLS directives placed in the main
proftpd.conf file do not apply to VirtualHosts. As such, you can add a
section that looks like this to every VirtualHost that needs to be secured:

```
<IfModule mod_tls.c>
    TLSEngine   on
    TLSLog      /var/log/proftpd/tls.galaxy.log
    # Your cert and private key
    TLSRSACertificateFile       /etc/ssl/certs/my.crt
    TLSRSACertificateKeyFile    /etc/ssl/private/my.key
    TLSCACertificateFile        /etc/ssl/certs/ca.bundle
    # I've found that this is required for FileZilla
    TLSOptions  NoCertRequest EnableDiags NoSessionReuseRequired
    # Most clients won't be sending certs
    TLSVerifyClient off
    TLSRequired on
</IfModule>
```
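
[Addition made in editing, not part of the draft page: one way to sanity-check
the TLS setup from a client, assuming the virtual host listens on port 4000 as
above; the hostname is a placeholder.]

```
# Explicit FTPS handshake test against the galaxy virtual host:
openssl s_client -connect ftp.example.org:4000 -starttls ftp
# A working setup prints the certificate chain; "Verify return code: 0 (ok)"
# appears when the CA bundle is trusted by the local OpenSSL installation.
```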







Cheers,
Eric

-- 
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
404-692-2048
e...@tamu.edu
rasche.e...@yandex.ru

[galaxy-dev] Setting a queue for torque submission

2013-11-14 Thread Jennifer Jackson
Hello, I am going to move your question over to the 
galaxy-...@bx.psu.edu mailing list to give it better visibility to the 
development/local install community. 
http://wiki.galaxyproject.org/MailingLists#The_lists
I didn't find it over here already, but apologies if this is double posted. 
Also, since I don't know the best answer, I will let others jump in!

Best,
Jen
Galaxy team


On 11/14/13 6:35 AM, Xian Su wrote:

Greetings

I'm having a hard time getting job submission to work via torque. 
Posted below is the relevant part of my universe file and job_conf.xml.


The main cause is that our torque server requires a queue to be 
requested. I can submit jobs just fine via job .sub files on the command 
line using qsub -q M30.


Running jobs via the web GUI results in the following:
galaxy.jobs.runners.pbs WARNING 2013-11-14 13:54:23,722 (28) 
pbs_submit failed (try 5/5), PBS error 15039: Route rejected by all 
destinations
galaxy.jobs.runners.pbs ERROR 2013-11-14 13:54:25,724 (28) All 
attempts to submit job failed


On the torque server's pbs logs, I see the corresponding
11/14/2013 13:54:23;0080;PBS_Server.66159;Req;req_reject;Reject reply 
code=15039(No default queue specified MSG=requested queue not found), 
aux=0, type=QueueJob, from xians@podmt1-100-62.novalocal


In my universe_wsgi.ini, I'm currently using the settings:
start_job_runners = pbs
default_cluster_job_runner = pbs:///-q M30 -l nodes=1:ppn=12/
^I mainly have no idea how to find the correct syntax for this pbs:///

But when I comment out default_cluster_job_runner and instead 
uncomment the line for the following job conf, I get the same error:

<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="2"/>
    </plugins>
    <handlers default="handlers">
        <handler id="main" tags="handlers"/>
    </handlers>
    <destinations default="pbs">
        <destination id="local" runner="local"/>
        <destination id="pbs" runner="pbs" tags="mycluster">
            <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=12</param>
            <param id="-q">M30</param>
        </destination>
    </destinations>
    <tools>
        <tool id="main" handler="main" destination="pbs"/>
    </tools>
    <limits>
    </limits>
</job_conf>

Thank you in advance for any help.

Regards,
Xian




--
Jennifer Hillman-Jackson
http://galaxyproject.org


Re: [galaxy-dev] Errors running DRMAA and PBS on remote server running Torque 4

2013-11-14 Thread Moskalenko,Oleksandr
Carrie,

This turned out to be an issue with Torque 4 changing from the pbs_submit() to 
the submit_pbs_hash() procedure while both pbs-drmaa and pbs-python were still 
using pbs_submit(). The maintainer of the pbs-drmaa library 
(http://sourceforge.net/p/pbspro-drmaa/wiki/Home/), Mariusz Mamonski, provided 
us with a fix for pbs-drmaa-1.0.15 today. If you're experiencing the issue 
frequently, I'd be happy to share the fixed library. Otherwise, I think Mariusz 
will probably provide a new pbs-drmaa release that incorporates the fix soon.

Regards,

Alex

From: galaxy-dev-boun...@lists.bx.psu.edu 
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of Moskalenko,Oleksandr
Sent: Monday, November 04, 2013 9:47 AM
To: Ganote, Carrie L; galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Errors running DRMAA and PBS on remote server running 
Torque 4

Hi Carrie,

It is a bug in the Torque 4.x series. It can be fixed for a time by restarting 
the Torque pbs_server process, but it's going to come back. It's not 
galaxy-specific, as any python-drmaa request will fail once Torque starts 
experiencing the issue.

Regards,

Alex

From: Ganote, Carrie L cgan...@iu.edu
Date: Tuesday, October 15, 2013 at 4:58 PM
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] Errors running DRMAA and PBS on remote server running 
Torque 4

Hi List,

I've sprouted some grays in the last week after my Galaxy instances all 
simultaneously ceased to submit jobs to our main cluster.

Some Galaxy instances are running the PBS job runner, and others use DRMAA. For 
the DRMAA runner I was getting:
galaxy.jobs.runners ERROR 2013-10-15 08:40:14,942 (1024) Unhandled exception 
calling queue_job
Traceback (most recent call last):
  File "galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 60, in run_next
    method(arg)
  File "galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 188, in queue_job
    external_job_id = self.ds.runJob(jt)
  File "build/bdist.linux-x86_64/egg/drmaa/__init__.py", line 331, in runJob
    _h.c(_w.drmaa_run_job, jid, _ct.sizeof(jid), jobTemplate)
  File "build/bdist.linux-x86_64/egg/drmaa/helpers.py", line 213, in c
    return f(*(args + (error_buffer, sizeof(error_buffer))))
  File "build/bdist.linux-x86_64/egg/drmaa/errors.py", line 90, in error_check
    raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
InternalException: code 1: (qsub) cannot access script file: Unauthorized 
Request  MSG=can not authorize request  (0-Success)

And in my PBS runner:
galaxy.jobs.runners.pbs WARNING 2013-10-14 17:13:07,319 (550) pbs_submit failed 
(try 1/5), PBS error 15044: Resources temporarily unavailable

To give some background, I had recently requested a new virtual machine to put 
my test/dev Galaxy on. I copied our production Galaxy to this new VM. I secured 
a new domain name for it and set it running. Everything was going well until I 
tried to hook it up to the cluster; at first I got an error saying that I 
didn't have permission to submit jobs. Makes sense, the new VM was not a 
qualified submit host for the cluster. I asked the sysadmins to add the VM as a 
submit host to the cluster using qmgr. As soon as this was done, not only could 
I still not submit jobs from the test Galaxy, but no Galaxy was able to submit 
jobs to the cluster.

The issue isn't with Galaxy here but with the underlying calls that it makes - 
for drmaa, I tracked it back to pbs-drmaa/bin/drmaa-run. For PBS, I'm sure it's 
somewhere within libtorque. In every case, I could call qsub from the command 
line and it would correctly submit jobs, which was more perplexing.

I re-installed python, drmaa.egg, pbs-drmaa, and rebooted the VM. I of course 
restarted Galaxy with each step, to no avail. I worked with the admins to see 
what was happening in the server logs, but the same cryptic error showed up - 
cannot authorize request. I've had this issue before in the past, more or less, 
but usually just gave up on it. It seemed to come and go sporadically, but 
rebooting the clusters seemed to help.

This time, with our production server no longer functioning, I begged for help 
and the admins looked through the pbs_server config but couldn't find any 
mistypes or problems. Reloading the config by sending hangup signals to 
pbs_server didn't help. Then we tried pausing the scheduler and restarting 
pbs_server completely - and eureka, all problems went away. PBS and DRMAA 
runners are back up and working fine. This really seems to be a bug in Torque 
4.1.5.1.

I hope this saves someone a lot of headache! Newer versions of Torque may be 
the answer. I would also advise against making changes to the pbs_server 
configuration while in production - we have monthly maintenance, and I don't 
think I'll ever request changes when there won't be an immediate reboot to 
flush the server!

Cheers,

Carrie

[galaxy-dev] Failed to generate job destination

2013-11-14 Thread Björn Grüning
Hi, 

has anyone seen the following bug in the Galaxy November release?

galaxy.jobs.handler ERROR 2013-11-15 00:07:41,008 Failed to generate job destination
Traceback (most recent call last):
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 311, in __check_if_ready_to_run
    self.job_wrappers[job.id].job_destination
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 620, in job_destination
    return self.job_runner_mapper.get_job_destination(self.params)
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/mapper.py", line 163, in get_job_destination
    self.__cache_job_destination( params )
  File "/usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/mapper.py", line 148, in __cache_job_destination
    raw_job_destination = self.job_wrapper.tool.get_job_destination( params )
AttributeError: 'NoneType' object has no attribute 'get_job_destination'


Thanks,
Bjoern



Re: [galaxy-dev] Failed to generate job destination

2013-11-14 Thread John Chilton
Hey Bjoern,

  Can you post your job_conf.xml, or send it to galaxy-b...@bx.psu.edu
if it has something sensitive in it? Alternatively, post your runner URLs if
you are still using the older format for specifying these things. Is
this for every job or just certain tools?

-John

On Thu, Nov 14, 2013 at 5:14 PM, Björn Grüning
bjoern.gruen...@pharmazie.uni-freiburg.de wrote:
 Hi,

 anyone has seen the following bug in Galaxy November release?

 galaxy.jobs.handler ERROR 2013-11-15 00:07:41,008 Failed to generate job
 destination
 Traceback (most recent call last):
   File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py, line
 311, in __check_if_ready_to_run
 self.job_wrappers[job.id].job_destination
   File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py, line
 620, in job_destination
 return self.job_runner_mapper.get_job_destination(self.params)
   File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/mapper.py, line
 163, in get_job_destination
 self.__cache_job_destination( params )
   File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/mapper.py, line
 148, in __cache_job_destination
 raw_job_destination =
 self.job_wrapper.tool.get_job_destination( params )
 AttributeError: 'NoneType' object has no attribute 'get_job_destination'


 Thanks,
 Bjoern




Re: [galaxy-dev] Failed to generate job destination

2013-11-14 Thread Björn Grüning
Hi John,

please find attached my job_conf.xml file. I was not able to find any
systematic pattern in the error. It does not seem to happen very often.

Thanks,
Bjoern

 Hey Bjoern,
 
   Can you post your job_conf.xml or send it to galaxy-b...@bx.psu.edu
 if it has something sensitive in it, alternatively your runner urls if
 you are still using the older format for specifying these things? Is
 this for every job or just certain tools?
 
 -John
 
 On Thu, Nov 14, 2013 at 5:14 PM, Björn Grüning
 bjoern.gruen...@pharmazie.uni-freiburg.de wrote:
  Hi,
 
  anyone has seen the following bug in Galaxy November release?
 
  galaxy.jobs.handler ERROR 2013-11-15 00:07:41,008 Failed to generate job
  destination
  Traceback (most recent call last):
File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py, line
  311, in __check_if_ready_to_run
  self.job_wrappers[job.id].job_destination
File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py, line
  620, in job_destination
  return self.job_runner_mapper.get_job_destination(self.params)
File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/mapper.py, line
  163, in get_job_destination
  self.__cache_job_destination( params )
File /usr/local/galaxy/galaxy-dist/lib/galaxy/jobs/mapper.py, line
  148, in __cache_job_destination
  raw_job_destination =
  self.job_wrapper.tool.get_job_destination( params )
  AttributeError: 'NoneType' object has no attribute 'get_job_destination'
 
 
  Thanks,
  Bjoern
 




job_conf.xml
Description: XML document