[galaxy-dev] galaxy on ubuntu 14.04: hangs on metadata cleanup

2014-05-07 Thread Jorrit Boekel
Dear all,

Has anyone tried running Galaxy on Ubuntu 14.04?

I’m trying a test setup on two virtual machines (worker + master) with a SLURM
queue. I’m running into strange problems when jobs finish: the master hangs,
completely unresponsive, with CPU at 100% (as reported by virt-manager, not by
top). Only DRMAA jobs seem to be affected. After the hang, a reboot shows the
job has finished (and is green in the history).

It took some debugging to figure out where things go wrong, but it seems to
happen when os.remove is called in lib/galaxy/datatypes/metadata.py in the
cleanup_external_metadata method. I can reproduce the problem by calling
os.remove(metadatafile) by hand (in an interactive Python shell), using pdb to
set a breakpoint just before the call. If I comment out the os.remove, it runs
on until it hits another delete call in lib/galaxy/jobs/__init__.py:
self.app.object_store.delete(self.get_job(), base_dir='job_work', 
entire_dir=True, dir_only=True, extra_dir=str(self.job_id))
This is in the cleanup() method of the JobWrapper class. I should mention here
that my Galaxy version is a bit old, since I’m running my own fork with local
modifications to the datatypes.
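
For reference, here is a standalone version of that kind of by-hand test (a
minimal sketch; the metadata file path is a placeholder, not the real one from
the job):

    # Reproduce the by-hand test: break just before the remove, as described
    # above, and see whether the call ever returns. The path is a placeholder.
    import os
    import pdb

    metadatafile = '/mnt/galaxy/data/tmp/metadata_out_XXXX.dat'  # placeholder

    pdb.set_trace()          # breakpoint just before the suspect call
    os.remove(metadatafile)  # the call that appears to hang
    print('os.remove returned cleanly')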

This object_store.delete also ends up calling shutil.rmtree and os.remove. So
remove calls to the filesystem seem to hang the whole thing, but only at this
point in time. Rebooting and removing the files by hand is no problem, and
stepping through with pdb also sometimes fixes it (but if I just press continue
it hangs). I don’t know where to go from here with debugging; has anyone seen
anything similar? Right now it feels like it may be caused by timing rather
than an actual code problem.
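
One possible next step (a debugging sketch, not a fix) is to wrap the suspect
remove in a SIGALRM watchdog so that, if it blocks, the process prints a
Python stack trace instead of hanging silently; if the handler never fires at
all, the process is probably stuck in an uninterruptible I/O wait in the
kernel, which would point below Galaxy rather than at it:

    # Debugging sketch: a SIGALRM watchdog around a possibly blocking remove.
    # If os.remove does not return within the timeout, the handler prints the
    # current Python stack so you can see where the process is stuck.
    import os
    import signal
    import traceback

    def _watchdog(signum, frame):
        print('remove still blocked after timeout, current stack:')
        traceback.print_stack(frame)

    def remove_with_watchdog(path, timeout=30):
        signal.signal(signal.SIGALRM, _watchdog)
        signal.alarm(timeout)      # arm the watchdog
        try:
            os.remove(path)        # the call suspected of hanging
        finally:
            signal.alarm(0)        # disarm if/when it returns

    # Usage (placeholder path):
    # remove_with_watchdog('/mnt/galaxy/data/tmp/metadata_out_XXXX.dat')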

cheers,
— 
Jorrit Boekel
Proteomics systems developer
BILS / Lehtiö lab
Scilifelab Stockholm, Sweden




___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


[galaxy-dev] Installation failure on Test Tool Shed

2014-05-07 Thread Peter Cock
Hi Dave,

Can you tell me any more about this failed tool dependency error:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira4_assembler

Installation errors - Tool dependencies:
    Name: MIRA    Type: package    Version: 4.0
Error:
  File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/lib/tool_shed/galaxy_install/tool_dependencies/install_util.py", line 345, in install_and_build_package_via_fabric
    tool_dependency = fabric_util.install_and_build_package( app, tool_shed_repository, tool_dependency, actions_dict )
  File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/lib/tool_shed/galaxy_install/tool_dependencies/fabric_util.py", line 93, in install_and_build_package
    initial_download=False )
  File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/lib/tool_shed/galaxy_install/tool_dependencies/recipe/recipe_manager.py", line 418, in execute_step
    initial_download=initial_download )
  File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py", line 1312, in execute_step
    return_output=False )
  File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/lib/tool_shed/galaxy_install/tool_dependencies/recipe/recipe_manager.py", line 247, in handle_command
    output = self.handle_complex_command( command )
  File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/lib/tool_shed/galaxy_install/tool_dependencies/recipe/recipe_manager.py", line 287, in handle_complex_command
    cwd=state.env[ 'lcwd' ] )
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1308, in _execute_child
    raise child_exception
[Errno 2] No such file or directory: '/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/install_and_test_tool_shed_repositories/repositories_with_tools/tmp/tmpxFCguW/tmp-toolshed-mtdt0b857/MIRA'

I presume this was a Python stack trace whose whitespace was lost when it was
rendered as HTML (re-indented above for readability). However, I don't quite
see what is going wrong here...
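
For what it's worth, the bottom of the trace is subprocess raising [Errno 2]
for the .../MIRA path. One way to get exactly that failure is Popen being given
a working directory that does not exist, which would fit handle_complex_command
passing cwd=state.env[ 'lcwd' ]. A minimal sketch of that failure mode (the
path is a stand-in; it could equally be that an earlier download/extract step
never created the MIRA directory):

    # Minimal reproduction of the failure mode at the bottom of the trace:
    # on Python 2.7, Popen with a nonexistent working directory raises
    # OSError(ENOENT) from _execute_child.
    import subprocess

    missing_dir = '/tmp/does-not-exist/MIRA'  # stand-in for the missing dir

    try:
        subprocess.Popen(['ls'], cwd=missing_dir)
    except OSError as err:
        print('Popen failed: %s' % err)  # [Errno 2] No such file or directory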

Thanks,

Peter


Re: [galaxy-dev] galaxy on ubuntu 14.04: hangs on metadata cleanup

2014-05-07 Thread Jorrit Boekel
I should probably mention that the data filesystem is NFS, exported by the
master from /mnt/galaxy/data and mounted on the worker; there is no separate
file server. The master is the machine that hangs.
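
A quick way to take Galaxy out of the equation (a sketch, assuming the layout
described above) is to time create-then-unlink cycles on the exported
directory from the master, outside Galaxy, and see whether plain unlinks ever
stall:

    # Standalone check of unlink behaviour on the NFS-exported data directory,
    # run on the master outside Galaxy. If plain unlinks never stall here, the
    # problem is more likely in how/when Galaxy issues the remove.
    import os
    import tempfile
    import time

    DATA_DIR = '/mnt/galaxy/data'   # the export mentioned above

    for i in range(100):
        fd, path = tempfile.mkstemp(dir=DATA_DIR)
        os.write(fd, b'x' * 4096)
        os.close(fd)
        start = time.time()
        os.remove(path)
        elapsed = time.time() - start
        if elapsed > 1.0:
            print('slow unlink #%d: %.1f s for %s' % (i, elapsed, path))
    print('done')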


cheers,
— 
Jorrit Boekel
Proteomics systems developer
BILS / Lehtiö lab
Scilifelab Stockholm, Sweden





[galaxy-dev] Correct toolshed tool config for a .loc file

2014-05-07 Thread Dooley, Damion
I'll try to keep this short! I'm wondering why, on a fresh install of Galaxy
(via a setup script into a Python virtualenv), my custom Tool Shed tool's .loc
files don't seem to be loading, i.e. the tool's tool_data_table_conf.xml file
isn't being loaded. All the other parts of the tool (and other Tool Shed tools)
run OK. On the Galaxy install where the tool was developed it also runs OK, and
I can't see any universe.wsgi.conf differences that would account for this,
though I'm sure the dev server has a number of other config settings that
differ.

I thought I knew my way around this .loc stuff, but I'm flummoxed.  It is as 
though /usr/local/galaxy/test/galaxy-dist/tool-data/[toolshed domain etc 
etc]/00d397a381f8/tool_data_table_conf.xml isn't loading into Galaxy's tabular 
data list?!

The only other thing I can think of is that I stuffed the tool-data related
.loc files into a subfolder of the tool:

   blast_reporting.py
   blast_reporting.xml
   tool_data_table_conf.xml.sample
   ...
   tool-data/
      blast_reporting_fields.loc.sample

But the blast_reporting_fields.loc and tool_data_table_conf.xml files that end
up under galaxy-dist/tool-data/.../ after the download seem just fine!

Regards,

Damion


[galaxy-dev] setup_virtualenv action doesn't update PATH

2014-05-07 Thread Rodrigo Garcia

Hello List,

I'm writing a tool_dependencies.xml file for a Python package.
It defines an <action type="setup_virtualenv"> action, and when I install it
into my Galaxy from the Test Tool Shed the package is indeed installed into
~/environments and the tool definition is set up correctly.


The package includes a callable script: when I install the package into a
virtualenv with pip, the CLI tool ends up on my PATH and I can call it from
anywhere.


But when I run the tool from Galaxy, it cannot find the script. Does the
setup_virtualenv action also update PATH?
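
As a quick check (a debugging sketch; 'my_cli_script' below is a placeholder
for the package's actual entry point), the tool could print the PATH it was
given and try to resolve the script at runtime, to confirm whether the
virtualenv's bin directory is visible when Galaxy runs the job:

    # Debugging sketch: run from the tool's command to see what PATH Galaxy
    # gives the job and whether the CLI script is reachable on it.
    # 'my_cli_script' is a placeholder for the real entry point name.
    import os
    from distutils.spawn import find_executable

    print('PATH = %s' % os.environ.get('PATH', ''))
    print('my_cli_script resolved to: %s' % find_executable('my_cli_script'))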


Thanks!

Rodrigo


Re: [galaxy-dev] Correct toolshed tool config for a .loc file

2014-05-07 Thread Dooley, Damion
One further piece of the mystery I see now:

The "View data tables registry" page shows my tool's .loc files, but has odd
values in the "Tool data path" and "Missing index file" columns, for example:

   blast_reporting_fields [tab] 
/usr/local/galaxy/test/galaxy-dist/tool-data/salk.bccdc.med.ubc.ca/toolshed/repos/ddooley/blast_reporting/00d397a381f8/blast_reporting_fields.loc
 [tab] /usr/local/galaxy/test/galaxy-dist/tool-data

   ./database/tmp/tmp-toolshed-gmfcrDITxyK/blast_reporting_fields.loc [tab] 
./database/tmp/tmp-toolshed-gmfcrDITxyK [tab] missing

There is no galaxy-dist/database/tmp-toolshed-gmfcrDITxyK folder, if that's
where it's supposed to be. Is there some setting for a database tmp folder
cache for tools?

Thanks for tips ...

Damion


[galaxy-dev] FW: Correct toolshed tool config for a .loc file - addendum

2014-05-07 Thread Dooley, Damion
So I see this now in main.log:

galaxy.tools.data WARNING 2014-05-07 16:40:29,613 Line 3 in tool data table 
'blast_reporting_fields' is invalid (HINT: 'TAB' characters must be used to 
separate fields):
length  numeric int 1   1   1   Alignment length

The thing is, tab characters ARE separating the fields. It's just that some
columns don't have values. Now, this was working fine on the other Galaxy
install, updated to the December 2013 changeset 11247. So has .loc processing
changed since then?
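
One way to see exactly what Galaxy is reading on that line (a small sketch; the
path is a placeholder for the installed .loc file) is to print the repr of each
data line together with its tab-separated field count, which makes stray spaces
versus real tabs, and short rows, obvious:

    # Sketch: inspect an installed .loc file to see the real separators and
    # how many columns each row actually has. The path is a placeholder.
    LOC_FILE = 'tool-data/blast_reporting_fields.loc'

    with open(LOC_FILE) as handle:
        for lineno, line in enumerate(handle, start=1):
            if line.startswith('#') or not line.strip():
                continue
            fields = line.rstrip('\n').split('\t')
            print('line %d: %d fields  %r' % (lineno, len(fields), fields))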

Damion


[galaxy-dev] FW: Correct toolshed tool config for a .loc file - RESOLVED

2014-05-07 Thread Dooley, Damion
Well Galaxians, I have to eat humble pie, kind of. Mismatches between the
tool's tool_data_table_conf.xml and the actual tabular data were responsible
for the disappearing act, which the log rather indirectly alluded to. It was
something that didn't show up on the dev server.

I'll push for better reporting on this, though, on the Trello board. What I'd
like to see is the "View data tables registry" report have its "Missing index
file" column changed to something like "Status", under which you would see
"Missing index file" or "Bad file - parse error, 23 lines", etc.

So my apologies to anyone who put some time into this.

Damion