Re: [galaxy-dev] trackster is not working on the vrelease_2014.02.10--2--29ce93a13ac7

2014-04-15 Thread Liisa Koski
Thanks Sajoscha, that did help, along with replacing datatypes_conf.xml 
with datatypes_conf.xml.sample and tool_conf.xml with 
tool_conf.xml.sample (some of the migrated tools were not removed from 
this file).
All in all...it's working now.
Best,
Liisa
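
For anyone hitting the same blank Trackster screen after an update, the 
recovery described in this thread amounts to roughly the following (a 
sketch only; paths assume a stock galaxy-dist checkout, and any locally 
modified config files should be backed up before overwriting them):

# copy the packed visualization scripts from a fresh clone into the updated install
cp -a /path/to/fresh/galaxy-dist/static/scripts/packed/viz \
      /path/to/your/galaxy-dist/static/scripts/packed/

# refresh the config files from the shipped samples, then re-apply local changes
cd /path/to/your/galaxy-dist
cp datatypes_conf.xml.sample datatypes_conf.xml
cp tool_conf.xml.sample tool_conf.xml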





From:   Sajoscha Sauer 
To: Liisa Koski 
Cc: "galaxy-dev@lists.bx.psu.edu Dev" 
Date:   11/04/2014 03:12 AM
Subject:Re: [galaxy-dev] trackster is not working on the 
vrelease_2014.02.10--2--29ce93a13ac7



Hi Liisa, 

For us, the fix was simply copying the directory 
'galaxy-dist/static/scripts/packed/viz' from a fresh install to the 
installation we wanted to update. 

I hope that helps! 

Cheers, 
Sajoscha 

On Apr 10, 2014, at 5:57 PM, Liisa Koski  wrote:

Hello, 
I just updated to Feb.10 and also noticed that Trackster is not working. I 
too get the blank screen. Could you please tell me how to manually update 
the vis packed files? I do not know what these are. 

Thanks in advance, 
Liisa 


Re: [galaxy-dev] trackster is not working on the vrelease_2014.02.10--2--29ce93a13ac7

2014-04-10 Thread Liisa Koski
Hello,
I just updated to Feb.10 and also noticed that Trackster is not working. I 
too get the blank screen. Could you please tell me how to manually update 
the vis packed files? I do not know what these are.

Thanks in advance,
Liisa


[galaxy-dev] example_watch_folder.py problem importing file

2014-04-10 Thread Liisa Koski
Hello,
I am testing the example_watch_folder.py script on my local instance of 
Galaxy (Feb.10 distribution). I have set up a simple workflow with the 
Fasta-to-Tabular tool. It takes a single input FASTA file and works in the 
Galaxy UI.

I have created input and output folders and followed the steps here 
http://gmod.827538.n3.nabble.com/Trouble-Shooting-example-watch-folder-py-td4030355.html
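
A typical invocation of the script looks roughly like this (argument order 
is from the usage message the script prints when run without arguments, so 
confirm against your copy; the API key and base URL below are copied from 
the log excerpt further down, while the library name and workflow id are 
made-up placeholders):

python scripts/api/example_watch_folder.py dd3916dfb37dffc08f070f1e3503015a \
    http://galaxy.server.ca:8080/api \
    /path/to/input_folder /path/to/output_folder \
    "Watched Library" f2db41e1fa331b3e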

But when I run the script and put a simple fasta file in the input folder 
I get the following error in the logs:

galaxy.jobs.runners.drmaa DEBUG 2014-04-10 11:11:03,127 (1899) command is: 
python galaxy_dist_dev/tools/data_source/upload.py galaxy_dist_dev 
galaxy_dist_dev/database/tmp/tmpK40iV3galaxy_dist_dev/database/tmp/tmpx_5QyU 
 
3802:galaxy_dist_dev/database/job_working_directory/001/1899/dataset_3802_files:galaxy_dist_dev/database/files/003/dataset_3802.dat;
 
return_code=$?; cd galaxy_dist_dev; galaxy_dist_dev/set_metadata.sh 
./database/files galaxy_dist_dev/database/job_working_directory/001/1899 . 
galaxy_dist_dev/universe_wsgi.ini galaxy_dist_dev/database/tmp/tmpK40iV3 
galaxy_dist_dev/database/job_working_directory/001/1899/galaxy.json; sh -c 
"exit $return_code"
galaxy.jobs.runners.drmaa DEBUG 2014-04-10 11:11:03,127 (1899) native 
specification is: -V -q all.q -l hostname="hostname.ca"
galaxy.jobs.runners.drmaa INFO 2014-04-10 11:11:03,133 (1899) queued as 
678390
galaxy.jobs DEBUG 2014-04-10 11:11:03,180 (1899) Persisting job 
destination (destination id: name_hwew)
galaxy.jobs.runners.drmaa DEBUG 2014-04-10 11:11:03,579 (1899/678390) 
state change: job is running
10.1.1.111 - - [10/Apr/2014:11:11:07 -0400] "POST 
/api/workflows?key=dd3916dfb37dffc08f070f1e3503015a HTTP/1.1" 200 - "-" 
"Python-urllib/2.6"
galaxy.jobs.runners.drmaa DEBUG 2014-04-10 11:11:07,778 (1899/678390) 
state change: job finished normally
galaxy.jobs DEBUG 2014-04-10 11:11:08,082 setting dataset state to ERROR
galaxy.jobs DEBUG 2014-04-10 11:11:08,227 job 1899 ended
galaxy.datatypes.metadata DEBUG 2014-04-10 11:11:08,227 Cleaning up 
external metadata files
galaxy.jobs DEBUG 2014-04-10 11:11:08,652 (1900) Working directory for job 
is: galaxy_dist_dev/database/job_working_directory/001/1900
galaxy.datatypes.metadata DEBUG 2014-04-10 11:11:08,770 Cleaning up 
external metadata files
galaxy.jobs.handler INFO 2014-04-10 11:11:08,795 (1900) Job unable to run: 
one or more inputs in error state
10.202.22.186 - - [10/Apr/2014:11:11:31 -0400] "GET /history/list 
HTTP/1.1" 200 - "http://galaxy.server.ca:8080/root"; "Mozilla/5.0 (Windows 
NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0"


And in the dataset itself, in the resulting Galaxy history UI:

WARNING:galaxy.datatypes.registry:Overriding conflicting datatype with 
extension 'asn1', using datatype from 
galaxy_dist_dev/database/tmp/tmpK40iV3.



Yesterday, before I did the upgrade to Feb.10, the input files would 
import (as a green dataset in the History UI) but be empty and say 'no 
peak'.


Thanks in advance for any help,
Liisa


Re: [galaxy-dev] Empty history pane

2014-03-14 Thread Liisa Koski
Our user was using the latest Firefox release. I asked him to try Firefox 
Portable and that seemed to do the trick. It was a problem with his 
Firefox install.

Thanks for your help,
Liisa




From:   Dannon Baker 
To: Liisa Koski 
Cc: Galaxy Dev 
Date:   13/03/2014 06:39 PM
Subject:Re: [galaxy-dev] Empty history pane



If it's possible, can you check (or ask the user to check) if there are 
any javascript errors if you open the browser console when experiencing 
this failure?


On Thu, Mar 13, 2014 at 2:07 PM, Liisa Koski  wrote:
Hello, 
Our site maintains a local Galaxy installation (Nov. 4th distribution). 
One of our users has lost the ability to view anything in his history 
pane. He is able to run tools and view the list of 'Saved Histories' in 
the middle pane but the History pane itself stays empty. When I 
impersonate him I can see his history pane. Very weird. Any insight would 
be much appreciated. 

Thanks, 
Liisa

Re: [galaxy-dev] Empty history pane

2014-03-14 Thread Liisa Koski
Thanks Dannon, 

I set Java to open its console (I assume that's what you are referring to) 
but when going to Galaxy it doesn't open. The console does open when I go 
to java.com and verify my version. I've also just removed and reinstalled 
Java and rebooted the machine. Still no history pane :(






From:   Dannon Baker 
To: Liisa Koski 
Cc: Galaxy Dev 
Date:   13/03/2014 06:39 PM
Subject:Re: [galaxy-dev] Empty history pane



If it's possible, can you check (or ask the user to check) if there are 
any javascript errors if you open the browser console when experiencing 
this failure?


On Thu, Mar 13, 2014 at 2:07 PM, Liisa Koski  wrote:
Hello, 
Our site maintains a local Galaxy installation (Nov. 4th distribution). 
One of our users has lost the ability to view anything in his history 
pane. He is able to run tools and view the list of 'Saved Histories' in 
the middle pane but the History pane itself stays empty. When I 
impersonate him I can see his history pane. Very weird. Any insight would 
be much appreciated. 

Thanks, 
Liisa

[galaxy-dev] Empty history pane

2014-03-13 Thread Liisa Koski
Hello,
Our site maintains a local Galaxy installation (Nov. 4th distribution). 
One of our users has lost the ability to view anything in his history 
pane. He is able to run tools and view the list of 'Saved Histories' in 
the middle pane but the History pane itself stays empty. When I 
impersonate him I can see his history pane. Very weird. Any insight would 
be much appreciated.

Thanks,
Liisa

[galaxy-dev] Galaxy Reports link - registered users per month - does not work

2014-01-17 Thread Liisa Koski
Hello,
First...I love the Galaxy Reports tool...thanks so much for this.

I did however come across an error when trying to view the Registered Users 
per month link. It would be really nice to have this working :)

Thanks in advance,
Liisa


URL: http://domain:9001/users/registered_users_per_month
Module paste.exceptions.errormiddleware:144 in __call__
>>  app_iter = self.application(environ, sr_checker)
Module paste.debug.prints:106 in __call__
>>  environ, self.app)
Module paste.wsgilib:543 in intercept_output
>>  app_iter = application(environ, replacement_start_response)
Module paste.lint:170 in lint_app
>>  iterator = application(environ, start_response_wrapper)
Module paste.recursive:84 in __call__
>>  return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__
>>  return self.application(environ, start_response)
Module galaxy.web.framework.base:132 in __call__
>>  return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:190 in handle_request
>>  body = method( trans, **kwargs )
Module galaxy.webapps.reports.controllers.users:29 in 
registered_users_per_month
>>  for row in q.execute():
Module sqlalchemy.sql.expression:2841 in execute
Module sqlalchemy.engine.base:2453 in _execute_clauseelement
Module sqlalchemy.engine.base:1584 in _execute_clauseelement
Module sqlalchemy.engine.base:1698 in _execute_context
Module sqlalchemy.engine.base:1691 in _execute_context
Module sqlalchemy.engine.default:331 in do_execute
Module MySQLdb.cursors:173 in execute
Module MySQLdb.connections:36 in defaulterrorhandler
OperationalError: (OperationalError) (1305, 'FUNCTION 
galaxy_production.date_trunc does not exist') 'SELECT date_trunc(%s, 
date(galaxy_user.create_time)) AS date, count(galaxy_user.id) AS num_users 
\nFROM galaxy_user GROUP BY date_trunc(%s, date(galaxy_user.create_time)) 
ORDER BY date DESC' ('month', 'month') 
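
The traceback shows the reports controller building its query around 
date_trunc(), which is a PostgreSQL function that MySQL does not provide, 
hence the 'FUNCTION galaxy_production.date_trunc does not exist' error. 
While the report page is broken, the month-level rollup can be pulled by 
hand with something like the following (a sketch; table and column names 
are taken from the query in the traceback, and you may need -u/-p 
credentials):

mysql galaxy_production -e "
  SELECT DATE_FORMAT(create_time, '%Y-%m-01') AS month,
         COUNT(id) AS num_users
  FROM galaxy_user
  GROUP BY month
  ORDER BY month DESC;"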


Re: [galaxy-dev] Visualizations broken after Aug. 12 update

2013-10-10 Thread Liisa Koski
Thank you Jeremy,

It turned out the issue was our version of bedtools. We updated to 2.17.0 
and no longer have the errors.
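
For anyone checking the same thing, the visualization converters rely on 
bedtools and the UCSC wigToBigWig binary being available to the jobs Galaxy 
runs, so a quick sanity check is (2.17.0 is simply the version that worked 
here, not a hard requirement):

bedtools --version     # should report a recent release, e.g. v2.17.0
which wigToBigWig      # must be on the PATH that Galaxy's jobs see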

Cheers,
Liisa





From:   Jeremy Goecks 
To: Liisa Koski 
Cc: "galaxy-dev@lists.bx.psu.edu Galaxy-dev" 

Date:   05/10/2013 08:08 PM
Subject:Re: [galaxy-dev] Visualizations broken after Aug. 12 
update



You'll need to do two things:

(1) Install wigToBigWig and bedtools; see steps 2 and 3 here: 
http://wiki.galaxyproject.org/Visualization%20Setup

(2) Update your datatypes_conf.xml file:

If you haven't made changes to your datatypes_conf.xml file, you can just 
copy datatypes_conf.xml.sample to datatypes_conf.xml to get the needed 
converters. 
 
If you've made changes to datatypes_conf.xml, you'll need to manually add 
the needed converters. We recently transitioned all the *_to_summary_tree 
converters to *_to_bigwig, so you'll want to remove the summary_tree 
converters and replace them with the bigwig converters.

Let us know if you have any problems/questions.

Best,
J.
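
For an installation with local changes, step (2) above can be approached by 
diffing against the sample and carrying over the new converter entries by 
hand; a rough sketch (the exact converter tags depend on your Galaxy 
revision, so treat this as a starting point):

cd /path/to/galaxy-dist
cp datatypes_conf.xml datatypes_conf.xml.bak              # keep your local edits
diff datatypes_conf.xml datatypes_conf.xml.sample | less  # find the new *_to_bigwig converters
grep -c summary_tree datatypes_conf.xml                   # any hits are stale converters to remove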


On Oct 3, 2013, at 1:33 PM, Liisa Koski wrote:

Hello, 
We recently updated to the Aug. 12 distribution. I uncommented the 
visualizations_config_directory = config/visualizations line in the universe 
file, and when I restarted and tried to view past Trackster Visualizations 
none of my tracks load. I get the error displayed in the track itself: 

Cannot display dataset due to an error. View error. Try again. 

When I click on View error I get pop up with this for my gff3 track: 

Input error: Cannot split into blocks. Found interval with fewer than 12 
columns.
needLargeMem: trying to allocate 0 bytes (limit: 17179869184) 

With this for my BAM track: 
grep: /Galaxy/galaxy_dist/database/files/016/dataset_16392.dat: No such 
file or directory
needLargeMem: trying to allocate 0 bytes (limit: 17179869184) 


All tracks in all published Visualizations are no longer loading. 
I do not see any errors in the log files. 

I have also updated my datatypes_conf.xml file by copying over 
datatypes_conf.xml.sample 


Any help would be much appreciated. 

Thanks in advance, 
Liisa 


[galaxy-dev] Visualizations broken after Aug. 12 update

2013-10-03 Thread Liisa Koski
Hello,
We recently updated to the Aug. 12 distribution. I uncommented the 
visualizations_config_directory = config/visualizations line in the universe 
file, and when I restarted and tried to view past Trackster Visualizations 
none of my tracks load. I get the error displayed in the track itself:

Cannot display dataset due to an error. View error. Try again.

When I click on View error I get pop up with this for my gff3 track:

Input error: Cannot split into blocks. Found interval with fewer than 12 
columns.
needLargeMem: trying to allocate 0 bytes (limit: 17179869184)

With this for my BAM track:
grep: /Galaxy/galaxy_dist/database/files/016/dataset_16392.dat: No such 
file or directory
needLargeMem: trying to allocate 0 bytes (limit: 17179869184)


All tracks in all published Visualizations are no longer loading.
I do not see any errors in the log files. 

I have also updated my datatypes_conf.xml file by copying over 
datatypes_conf.xml.sample


Any help would be much appreciated.

Thanks in advance,
Liisa


[galaxy-dev] Can't specify workflow parameters for integer values

2013-02-01 Thread Liisa Koski
Hello,
I created a simple workflow with an input file and the 'Select first' 
tool. I wanted the value to be a parameter ${value}. I am able to save the 
workflow and see the little grey parameters box at the top of the workflow 
editor.
When I try to run this workflow I get a yellow box at the top of the 
window stating the following:

Problems were encountered when loading this workflow, likely due to tool 
version changes. Missing parameter values have been replaced with default. 
Please review the parameter values below. 

I do not however see an error in paster.log

When I reopen the workflow in the editor I get a popup window saying 
'Workflow loaded with changes'

Problems were encountered loading this workflow (possibly a result of tool 
upgrades). Please review the following parameters and then save.
Step 2: Select first
Value no longer valid for 'Select first', replaced with default

I am running two instances of Galaxy (Jan. 11 distribution) and this 
occurs in both of my instances, and with other workflows as well. But it 
appears to only occur for integer parameters...not text parameters.

Any help would be much appreciated.

Liisa

[galaxy-dev] maximum recursion depth exceeded while calling a Python object

2013-01-25 Thread Liisa Koski
Hi,
I'm running a local instance of Galaxy and no matter what tool I run I get 
the following error:

Error executing tool: maximum recursion depth exceeded while calling a 
Python object

In the log file:

galaxy.tools ERROR 2013-01-25 14:06:54,375 Exception caught while 
attempting tool execution:
Traceback (most recent call last):
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 1776, in 
handle_input
_, out_data = self.execute( trans, incoming=params, history=history )
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 2103, in 
execute
return self.tool_action.execute( self, trans, incoming=incoming, 
set_output_hid=set_output_hid, history=history, **kwargs )
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/actions/__init__.py", line 
203, in execute
chrom_info = build_fasta_dataset.get_converted_dataset( trans, 'len' 
).file_name
  File "/Galaxy/galaxy_dist/lib/galaxy/model/__init__.py", line 1161, in 
get_converted_dataset
new_dataset = self.datatype.convert_dataset( trans, self, target_ext, 
return_output=True, visible=False, deps=deps, set_output_history=False 
).values()[0]
  File "/Galaxy/galaxy_dist/lib/galaxy/datatypes/data.py", line 467, in 
convert_dataset
converted_dataset = converter.execute( trans, incoming=params, 
set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 2103, in 
execute
return self.tool_action.execute( self, trans, incoming=incoming, 
set_output_hid=set_output_hid, history=history, **kwargs )
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/actions/__init__.py", line 
203, in execute
chrom_info = build_fasta_dataset.get_converted_dataset( trans, 'len' 
).file_name
  File "/Galaxy/galaxy_dist/lib/galaxy/model/__init__.py", line 1161, in 
get_converted_dataset
new_dataset = self.datatype.convert_dataset( trans, self, target_ext, 
return_output=True, visible=False, deps=deps, set_output_history=False 
).values()[0]
  File "/Galaxy/galaxy_dist/lib/galaxy/datatypes/data.py", line 467, in 
convert_dataset
converted_dataset = converter.execute( trans, incoming=params, 
set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 2103, in 
execute
return self.tool_action.execute( self, trans, incoming=incoming, 
set_output_hid=set_output_hid, history=history, **kwargs )
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/actions/__init__.py", line 
203, in execute
chrom_info = build_fasta_dataset.get_converted_dataset( trans, 'len' 
).file_name
  File "/Galaxy/galaxy_dist/lib/galaxy/model/__init__.py", line 1161, in 
get_converted_dataset
new_dataset = self.datatype.convert_dataset( trans, self, target_ext, 
return_output=True, visible=False, deps=deps, set_output_history=False 
).values()[0]
  File "/Galaxy/galaxy_dist/lib/galaxy/datatypes/data.py", line 467, in 
convert_dataset
converted_dataset = converter.execute( trans, incoming=params, 
set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 2103, in 
execute
return self.tool_action.execute( self, trans, incoming=incoming, 
set_output_hid=set_output_hid, history=history, **kwargs )

thousands of lines here...ending with...

  File 
"/Galaxy/galaxy_dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/schema.py",
 
line 760, in _make_proxy
selectable.columns.add(c)
  File 
"/Galaxy/galaxy_dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/sql/expression.py",
 
line 1668, in add
self[column.key] = column
  File 
"/Galaxy/galaxy_dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/sql/expression.py",
 
line 1671, in __setitem__
if key in self:
  File 
"/Galaxy/galaxy_dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/sql/expression.py",
 
line 1702, in __contains__
return util.OrderedProperties.__contains__(self, other)
  File 
"/Galaxy/galaxy_dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/util.py",
 
line 652, in __contains__
return key in self._data
RuntimeError: maximum recursion depth exceeded while calling a Python 
object
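
The repeated frames show the tool action asking for a 'len' (chromosome 
lengths) conversion of the build's FASTA dataset and the converter 
triggering the same conversion again. One thing worth ruling out (a guess 
on my part, not a confirmed diagnosis) is a missing .len file for the build 
in use:

# len_file_path in universe_wsgi.ini defaults to tool-data/shared/ucsc/chrom
# (option name as in the sample config of this era; adjust if overridden).
# Look for <your dbkey>.len in the listing:
ls tool-data/shared/ucsc/chrom/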


Any help would be much appreciated,
Thanks in advance,
Liisa


Re: [galaxy-dev] Fw: Error when attempting to install new distribution - TypeError: Invalid argument(s) 'server_side_cursors', 'max_overflow'

2013-01-24 Thread Liisa Koski
Yes that worked! Thanks,
Liisa





From:   Jeremy Goecks 
To: Liisa Koski 
Cc: 
Date:   23/01/2013 11:02 PM
Subject:Re: [galaxy-dev] Fw: Error when attempting to install new 
distribution - TypeError: Invalid argument(s) 'server_side_cursors', 
'max_overflow'



My guess is that you'll need to comment out some database options in your 
universe_wsgi.ini file that are valid for MySQL but not for SQLite. 
Specifically, database_engine_option_max_overflow and perhaps other 
options as well.

Best,
J.
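
Concretely, that means commenting out the MySQL-oriented engine options 
before trying SQLite; something like the following (the option names are 
the two reported in the error message; sed keeps a .bak copy of the file):

cd galaxy-dev
sed -i.bak \
    -e 's/^database_engine_option_max_overflow/#&/' \
    -e 's/^database_engine_option_server_side_cursors/#&/' \
    universe_wsgi.ini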

On Jan 23, 2013, at 1:54 PM, Liisa Koski wrote:

Hello, 
I am attempting to install a new galaxy-dist and am running into the 
following error. I first tried it by using a mysql database but this error 
also occurs when I use the default SQLite database. 
Any help would be much appreciated. 
Thanks in advance, 
Liisa 

[galaxy-dev]$ ./run.sh 
Some eggs are out of date, attempting to fetch... 
Fetched 
http://eggs.galaxyproject.org/pysqlite/pysqlite-2.5.6_3.6.17_static-py2.6-linux-x86_64-ucs4.egg
 

Fetch successful. 
galaxy-dev/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1:
 
RuntimeWarning: __builtin__.file size changed, may indicate binary 
incompatibility 
  from csamtools import * 
python path is: galaxy-dev/eggs/numpy-1.6.0-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/boto-2.5.2-py2.6.egg, 
galaxy-dev/eggs/mercurial-2.2.3-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/Fabric-1.4.2-py2.6.egg, 
galaxy-dev/eggs/ssh-1.7.14-py2.6.egg, 
galaxy-dev/eggs/Whoosh-0.3.18-py2.6.egg, 
galaxy-dev/eggs/pycrypto-2.5-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/python_lzo-1.08_2.03_static-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/bx_python-0.7.1_7b95ff194725-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/amqplib-0.6.1-py2.6.egg, 
galaxy-dev/eggs/pexpect-2.4-py2.6.egg, 
galaxy-dev/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg, 
galaxy-dev/eggs/Babel-0.9.4-py2.6.egg, 
galaxy-dev/eggs/MarkupSafe-0.12-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/Mako-0.4.1-py2.6.egg, 
galaxy-dev/eggs/WebHelpers-0.2-py2.6.egg, 
galaxy-dev/eggs/simplejson-2.1.1-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/wchartype-0.1-py2.6.egg, 
galaxy-dev/eggs/elementtree-1.2.6_20050316-py2.6.egg, 
galaxy-dev/eggs/docutils-0.7-py2.6.egg, 
galaxy-dev/eggs/WebOb-0.8.5-py2.6.egg, 
galaxy-dev/eggs/Routes-1.12.3-py2.6.egg, 
galaxy-dev/eggs/Cheetah-2.2.2-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/PasteDeploy-1.3.3-py2.6.egg, 
galaxy-dev/eggs/PasteScript-1.7.3-py2.6.egg, 
galaxy-dev/eggs/Paste-1.6-py2.6.egg, galaxy-dev/lib, 
/usr/lib64/python2.6/site-packages/distribute-0.6.12-py2.6.egg, 
/usr/lib64/python2.6/site-packages/blist-1.3.4-py2.6-linux-x86_64.egg, 
/usr/lib/python2.6/site-packages/nose-1.0.0-py2.6.egg, 
/usr/lib/python2.6/site-packages/argparse-1.2.1-py2.6.egg, 
/usr/lib/python2.6/site-packages/pip-1.2.1-py2.6.egg, 
/usr/lib/python2.6/site-packages, /usr/lib64/python2.6/xml/etree, 
/usr/lib64/python26.zip, /usr/lib64/python2.6, 
/usr/lib64/python2.6/plat-linux2, /usr/lib64/python2.6/lib-tk, 
/usr/lib64/python2.6/lib-old, /usr/lib64/python2.6/lib-dynload, 
/usr/lib64/python2.6/site-packages/PIL, 
/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info, 
/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info, 
/usr/lib64/python2.6/site-packages 
galaxy.tool_shed.tool_shed_registry DEBUG 2013-01-21 13:52:06,034 Loading 
references to tool sheds from tool_sheds_conf.xml 
galaxy.tool_shed.tool_shed_registry DEBUG 2013-01-21 13:52:06,034 Loaded 
reference to tool shed: Galaxy main tool shed 
galaxy.tool_shed.tool_shed_registry DEBUG 2013-01-21 13:52:06,034 Loaded 
reference to tool shed: Galaxy test tool shed 
galaxy.model.migrate.check DEBUG 2013-01-21 13:52:06,112 pysqlite>=2 egg 
successfully loaded for sqlite dialect 
Traceback (most recent call last): 
  File "galaxy-dev/lib/galaxy/webapps/galaxy/buildapp.py", line 36, in 
app_factory 
app = UniverseApplication( global_conf = global_conf, **kwargs ) 
  File "galaxy-dev/lib/galaxy/app.py", line 45, in __init__ 
create_or_verify_database( db_url, kwargs.get( 'global_conf', {} 
).get( '__file__', None ), self.config.database_engine_options, app=self ) 

  File "galaxy-dev/lib/galaxy/model/migrate/check.py", line 46, in 
create_or_verify_database 
engine = create_engine( url, **engine_options ) 
  File 
"galaxy-dev/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/__init__.py",
 
line 223, in create_engine 
return strategy.create(*args, **kwargs) 
  File 
"galaxy-dev/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/strategies.py",
 
line 121, in create 
engineclass.__name__)) 
TypeError: Invalid argument(s) 'server_side_cursors','max_overflow' sent 
to create_engine(), using configuration 
SQLi

[galaxy-dev] Fw: Error when attempting to install new distribution - TypeError: Invalid argument(s) 'server_side_cursors', 'max_overflow'

2013-01-23 Thread Liisa Koski
Hello,
I am attempting to install a new galaxy-dist and am running into the 
following error. I first tried it by using a MySQL database but this error 
also occurs when I use the default SQLite database.
Any help would be much appreciated.
Thanks in advance,
Liisa

[galaxy-dev]$ ./run.sh
Some eggs are out of date, attempting to fetch...
Fetched 
http://eggs.galaxyproject.org/pysqlite/pysqlite-2.5.6_3.6.17_static-py2.6-linux-x86_64-ucs4.egg
Fetch successful.
galaxy-dev/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1:
 
RuntimeWarning: __builtin__.file size changed, may indicate binary 
incompatibility
  from csamtools import *
python path is: galaxy-dev/eggs/numpy-1.6.0-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/boto-2.5.2-py2.6.egg, 
galaxy-dev/eggs/mercurial-2.2.3-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/Fabric-1.4.2-py2.6.egg, 
galaxy-dev/eggs/ssh-1.7.14-py2.6.egg, 
galaxy-dev/eggs/Whoosh-0.3.18-py2.6.egg, 
galaxy-dev/eggs/pycrypto-2.5-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/python_lzo-1.08_2.03_static-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/bx_python-0.7.1_7b95ff194725-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/amqplib-0.6.1-py2.6.egg, 
galaxy-dev/eggs/pexpect-2.4-py2.6.egg, 
galaxy-dev/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg, 
galaxy-dev/eggs/Babel-0.9.4-py2.6.egg, 
galaxy-dev/eggs/MarkupSafe-0.12-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/Mako-0.4.1-py2.6.egg, 
galaxy-dev/eggs/WebHelpers-0.2-py2.6.egg, 
galaxy-dev/eggs/simplejson-2.1.1-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/wchartype-0.1-py2.6.egg, 
galaxy-dev/eggs/elementtree-1.2.6_20050316-py2.6.egg, 
galaxy-dev/eggs/docutils-0.7-py2.6.egg, 
galaxy-dev/eggs/WebOb-0.8.5-py2.6.egg, 
galaxy-dev/eggs/Routes-1.12.3-py2.6.egg, 
galaxy-dev/eggs/Cheetah-2.2.2-py2.6-linux-x86_64-ucs4.egg, 
galaxy-dev/eggs/PasteDeploy-1.3.3-py2.6.egg, 
galaxy-dev/eggs/PasteScript-1.7.3-py2.6.egg, 
galaxy-dev/eggs/Paste-1.6-py2.6.egg, galaxy-dev/lib, 
/usr/lib64/python2.6/site-packages/distribute-0.6.12-py2.6.egg, 
/usr/lib64/python2.6/site-packages/blist-1.3.4-py2.6-linux-x86_64.egg, 
/usr/lib/python2.6/site-packages/nose-1.0.0-py2.6.egg, 
/usr/lib/python2.6/site-packages/argparse-1.2.1-py2.6.egg, 
/usr/lib/python2.6/site-packages/pip-1.2.1-py2.6.egg, 
/usr/lib/python2.6/site-packages, /usr/lib64/python2.6/xml/etree, 
/usr/lib64/python26.zip, /usr/lib64/python2.6, 
/usr/lib64/python2.6/plat-linux2, /usr/lib64/python2.6/lib-tk, 
/usr/lib64/python2.6/lib-old, /usr/lib64/python2.6/lib-dynload, 
/usr/lib64/python2.6/site-packages/PIL, 
/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info, 
/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info, 
/usr/lib64/python2.6/site-packages
galaxy.tool_shed.tool_shed_registry DEBUG 2013-01-21 13:52:06,034 Loading 
references to tool sheds from tool_sheds_conf.xml
galaxy.tool_shed.tool_shed_registry DEBUG 2013-01-21 13:52:06,034 Loaded 
reference to tool shed: Galaxy main tool shed
galaxy.tool_shed.tool_shed_registry DEBUG 2013-01-21 13:52:06,034 Loaded 
reference to tool shed: Galaxy test tool shed
galaxy.model.migrate.check DEBUG 2013-01-21 13:52:06,112 pysqlite>=2 egg 
successfully loaded for sqlite dialect
Traceback (most recent call last):
  File "galaxy-dev/lib/galaxy/webapps/galaxy/buildapp.py", line 36, in 
app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "galaxy-dev/lib/galaxy/app.py", line 45, in __init__
create_or_verify_database( db_url, kwargs.get( 'global_conf', {} 
).get( '__file__', None ), self.config.database_engine_options, app=self )
  File "galaxy-dev/lib/galaxy/model/migrate/check.py", line 46, in 
create_or_verify_database
engine = create_engine( url, **engine_options )
  File 
"galaxy-dev/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/__init__.py",
 
line 223, in create_engine
return strategy.create(*args, **kwargs)
  File 
"galaxy-dev/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/strategies.py",
 
line 121, in create
engineclass.__name__))
TypeError: Invalid argument(s) 'server_side_cursors','max_overflow' sent 
to create_engine(), using configuration 
SQLiteDialect/SingletonThreadPool/TLEngine.  Please check that the keyword 
arguments are appropriate for this combination of components.

Re: [galaxy-dev] DRMAA runner weirdness

2013-01-15 Thread Liisa Koski
In our case someone had installed and started a second development 
instance of Galaxy but used the same database as the first development 
instance. So the ids were mixed up and causing some jobs to crash. 
Yuck!

Thanks,
Liisa



From:   Nate Coraor 
To: Liisa Koski 
Cc: kellr...@soe.ucsc.edu, "galaxy-dev@lists.bx.psu.edu" 
, galaxy-dev-boun...@lists.bx.psu.edu
Date:   14/01/2013 10:48 AM
Subject:Re: [galaxy-dev] DRMAA runner weirdness



On Jan 11, 2013, at 9:32 AM, Liisa Koski wrote:

> Hello, 
> Can you please post the link to this patch? I do not see it in the mail 
thread and I too have noticed some issues with the DRMAA job running since 
updating to the Oct. 23rd distribution. I don't know if it is related yet 
but I'd like to try the patch to see. I have two local instances of Galaxy 
(prod and dev). On my dev instance (which is fully up to date), when I run 
the same job multiple times, sometimes it finishes and sometimes it dies, 
this is independent of which node it runs on. My prod instance is still at 
the Oct. 03 distribution and does not experience this problem. So I am 
afraid to update our production instance. 
> 
> Thanks in advance, 
> Liisa 

Hi Liisa,

Here's the one that Kyle is referring to:


https://bitbucket.org/galaxy/galaxy-central/commits/c015b82b3944f967e2c859d5552c00e3e38a2da0


However, this patch should only fix the problem of the server segfaulting 
when deleting certain jobs (ones that have not yet been dispatched to the 
cluster).

--nate

> 
> 
> 
> 
> From:Kyle Ellrott  
> To:Nate Coraor  
> Cc:"galaxy-dev@lists.bx.psu.edu"  
> Date:10/01/2013 07:44 PM 
> Subject:Re: [galaxy-dev] DRMAA runner weirdness 
> Sent by:galaxy-dev-boun...@lists.bx.psu.edu 
> 
> 
> 
> I did a merge of galaxy-central that included the patch you posted 
today. The scheduling problem seems to have gone away. Although I'm still 
getting back 'Job output not returned from cluster' for errors. This seems 
odd, as the system previously would output stderr correctly. 
> 
> Kyle 
> 
> 
> On Thu, Jan 10, 2013 at 8:30 AM, Nate Coraor  wrote: 
> On Jan 9, 2013, at 12:18 AM, Kyle Ellrott wrote:
> 
> > I'm running a test Galaxy system on a cluster (merged galaxy-dist on 
January 4th). And I've noticed some odd behavior from the DRMAA job 
runner.
> > I'm running a multithread system, one web server, one job_manager, and 
three job_handlers. DRMAA is the default job runner (the command for 
tophat2 is drmaa://-V -l mem_total=7G -pe smp 2/), with SGE 6.2u5 being 
the engine underneath.
> >
> > My test involves trying to run three different Tophat2 jobs. The first 
two seem to start up (and get put on the SGE queue), but the third stays 
grey, with the job manager listing it in state 'new' with command line 
'None'. It doesn't seem to leave this state. Both of the jobs that 
actually got onto the queue die (reasons unknown, but much to early, 
probably some tophat/bowtie problem), but one job is listed in error state 
with stderr as 'Job output not returned from cluster', while the other job 
(which is no longer in the SGE queue) is still listed as running.
> 
> Hi Kyle,
> 
> It sounds like there are a bunch of issues here.  Do you have any limits 
set as to the number of concurrent jobs allowed?  If not, you may need to 
add a bit of debugging information to the manager or handler code to 
figure out why the 'new' job is not being dispatched for execution.
> 
> For the 'error' job, more information about output collection should be 
available from the Galaxy server log.  If you have general SGE problems 
this may not be Galaxy's fault.  You do need to make sure that the 
stdout/stderr files are able to be properly copied back to the Galaxy 
server upon job completion.
> 
> For the 'running' job, make sure you've got 'set_metadata_externally = 
True' in your Galaxy config.
> 
> --nate
> 
> >
> > Any ideas?
> >
> >
> > Kyle

Re: [galaxy-dev] DRMAA runner weirdness

2013-01-11 Thread Liisa Koski
Hello,
Can you please post the link to this patch? I do not see it in the mail 
thread and I too have noticed some issues with the DRMAA job running since 
updating to the Oct. 23rd distribution. I don't know if it is related yet 
but I'd like to try the patch to see. I have two local instances of Galaxy 
(prod and dev). On my dev instance (which is fully up to date), when I run 
the same job multiple times, sometimes it finishes and sometimes it dies, 
this is independent of which node it runs on. My prod instance is still at 
the Oct. 03 distribution and does not experience this problem. So I am 
afraid to update our production instance. 

Thanks in advance,
Liisa




From:   Kyle Ellrott 
To: Nate Coraor 
Cc: "galaxy-dev@lists.bx.psu.edu" 
Date:   10/01/2013 07:44 PM
Subject:Re: [galaxy-dev] DRMAA runner weirdness
Sent by:galaxy-dev-boun...@lists.bx.psu.edu



I did a merge of galaxy-central that included the patch you posted 
today. The scheduling problem seems to have gone away. Although I'm still 
getting back 'Job output not returned from cluster' for errors. This seems 
odd, as the system previously would output stderr correctly.

Kyle


On Thu, Jan 10, 2013 at 8:30 AM, Nate Coraor  wrote:
On Jan 9, 2013, at 12:18 AM, Kyle Ellrott wrote:

> I'm running a test Galaxy system on a cluster (merged galaxy-dist on 
January 4th). And I've noticed some odd behavior from the DRMAA job 
runner.
> I'm running a multithread system, one web server, one job_manager, and 
three job_handlers. DRMAA is the default job runner (the command for 
tophat2 is drmaa://-V -l mem_total=7G -pe smp 2/), with SGE 6.2u5 being 
the engine underneath.
>
> My test involves trying to run three different Tophat2 jobs. The first 
two seem to start up (and get put on the SGE queue), but the third stays 
grey, with the job manager listing it in state 'new' with command line 
'None'. It doesn't seem to leave this state. Both of the jobs that 
actually got onto the queue die (reasons unknown, but much too early, 
probably some tophat/bowtie problem), but one job is listed in error state 
with stderr as 'Job output not returned from cluster', while the other job 
(which is no longer in the SGE queue) is still listed as running.

Hi Kyle,

It sounds like there are a bunch of issues here.  Do you have any limits set 
as to the number of concurrent jobs allowed?  If not, you may need to add 
a bit of debugging information to the manager or handler code to figure 
out why the 'new' job is not being dispatched for execution.

For the 'error' job, more information about output collection should be 
available from the Galaxy server log.  If you have general SGE problems 
this may not be Galaxy's fault.  You do need to make sure that the 
stdout/stderr files are able to be properly copied back to the Galaxy 
server upon job completion.

For the 'running' job, make sure you've got 'set_metadata_externally = 
True' in your Galaxy config.

--nate
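
Both settings Nate mentions can be checked without restarting anything; a 
quick sketch (only set_metadata_externally is named explicitly here, and 
the job-limit option names vary by release, so the second grep is 
deliberately broad):

grep -n 'set_metadata_externally' universe_wsgi.ini    # want: set_metadata_externally = True
grep -nE 'job_limit|queue_workers' universe_wsgi.ini   # any concurrency limits currently set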

>
> Any ideas?
>
>
> Kyle

Re: [galaxy-dev] Can't edit galaxy workflow

2012-11-26 Thread Liisa Koski
Hi,
Can you please send the entire link to access this changeset? I keep 
searching for 5013377e0bf7 but can not find it.


Thanks in advance,
Liisa



From:   Dannon Baker 
To: Sanjarbek Hudaiberdiev 
Cc: galaxy-...@bx.psu.edu
Date:   13/11/2012 12:34 PM
Subject:Re: [galaxy-dev] Can't edit galaxy workflow
Sent by:galaxy-dev-boun...@lists.bx.psu.edu



Sanjar,

This is fixed as of 5013377e0bf7.  This may not be in the next 
distribution, but will be in the one after that.  Of course, you can 
manually pull the change from galaxy-central at any time.

-Dannon
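
For reference, pulling a single changeset from galaxy-central with 
Mercurial looks roughly like this (a sketch; run it from your galaxy-dist 
working copy and review the change before updating a production server):

cd /path/to/galaxy-dist
hg pull -r 5013377e0bf7 https://bitbucket.org/galaxy/galaxy-central
hg log -r 5013377e0bf7     # confirm the changeset is present locally
hg update                  # or merge/graft into your local branch as appropriate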


On Nov 13, 2012, at 9:45 AM, Sanjarbek Hudaiberdiev  
wrote:

> I tried to reply to similar posts, but couldn't figure out how to do it. 
So posting again:
> 
> Galaxy giving this error when editing workflow, just after creating 
workflow from existing history:
> 
> URL: 
http://localhost/galaxy/workflow/load_workflow?id=df7a1f0c02a5b08e&_=1352817462148

> Module weberror.evalexception.middleware:364 in respond
> >>  app_iter = self.application(environ, detect_start_response)
> Module paste.debug.prints:98 in __call__
> >>  environ, self.app)
> Module paste.wsgilib:539 in intercept_output
> >>  app_iter = application(environ, replacement_start_response)
> Module paste.recursive:80 in __call__
> >>  return self.application(environ, start_response)
> Module paste.httpexceptions:632 in __call__
> >>  return self.application(environ, start_response)
> Module galaxy.web.framework.base:160 in __call__
> >>  body = method( trans, **kwargs )
> Module galaxy.web.framework:73 in decorator
> >>  return simplejson.dumps( func( self, trans, *args, **kwargs ) )
> Module galaxy.webapps.galaxy.controllers.workflow:733 in load_workflow
> >>  'tooltip': module.get_tooltip( static_path=url_for( '/static' ) ),
> Module galaxy.workflow.modules:258 in get_tooltip
> >>  return self.tool.help.render( static_path=static_path )
> AttributeError: 'NoneType' object has no attribute 'render'
> 
> Could anyone help me to solve this problem?
> 
> Thanks,
> Sanjar.

Re: [galaxy-dev] Error when running cleanup_datasets.py

2012-11-12 Thread Liisa Koski
That did it! Thanks Nate!




From:   Nate Coraor 
To: Liisa Koski 
Cc: galaxy-dev@lists.bx.psu.edu
Date:   12/11/2012 12:13 PM
Subject:Re: [galaxy-dev] Error when running cleanup_datasets.py



On Nov 8, 2012, at 8:55 PM, Liisa Koski wrote:

> Hi Nate, 
> I'm back to trying to figure this out again as I am running out of disk 
space. I added the bit of code you suggested below, but I don't think it 
helped, I'm not so familiar with python. 
> 
> I'm now running Galaxy Reports and it tells me that I have 7479 datasets 
that were deleted but have not yet been purged. 
> 
> I get the error below when I run cleanup_datasets.py with both the -5 
and -4 flag 
> 
> Marking as deleted: LibraryDatasetDatasetAssociation id 6907 (for 
dataset id 51991) 
> Deleting dataset id 51991 
> Deleting library dataset id  7225 
> Traceback (most recent call last): 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 526, in 
 
> if __name__ == "__main__": main() 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 124, in main 

> purge_folders( app, cutoff_time, options.remove_from_disk, info_only 
= options.info_only, force_retry = options.force_retry ) 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 247, in 
purge_folders 
> _purge_folder( folder, app, remove_from_disk, info_only = info_only 
) 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 499, in 
_purge_folder 
> _purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only ) 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 499, in 
_purge_folder 
> _purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only ) 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder 
> _purge_dataset_instance( ldda, app, remove_from_disk, info_only = 
info_only ) #mark a DatasetInstance as deleted, clear associated files, 
and mark the Dataset as deleted if it is deletable 
>   File "scripts/cleanup_datasets/cleanup_datasets.py", line 373, in 
_purge_dataset_instance 
> log.debug( '%s %s has None dataset' % ( type( dataset_instance ), 
dataset_instance.id ) ) 
> AttributeError: 'NoneType' object has no attribute 'id' 

Ah, I was looking at the wrong level.  You have a library_dataset without 
an associated library_dataset_dataset_association.  The following SQL 
should return it and any others:

select id from library_dataset where 
library_dataset_dataset_association_id is null

Although the output indicates that the offending library_dataset id should 
be 7225.  The easiest way to solve this problem is probably to orphan the 
broken library dataset, e.g.:

update library_dataset set folder_id = null where id=7225;

--nate 
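
Before re-running the purge with -r (remove from disk), a dry run can 
confirm the broken entry was the only blocker; something like the following 
(the flag name is inferred from options.info_only in the traceback, so 
check the script's --help on your revision):

python ./scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini -d 10 -5 --info_only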

> 
> Thanks in advance for your help, 
> Liisa 
> 
> 
> 
> 
> 
> From:    Nate Coraor  
> To:Liisa Koski  
> Cc:galaxy-dev@lists.bx.psu.edu 
> Date:02/10/2012 10:50 AM 
> Subject:Re: [galaxy-dev] Error when running cleanup_datasets.py 
> 
> 
> 
> On Oct 2, 2012, at 10:44 AM, Liisa Koski wrote:
> 
> > Hi Nate, 
> > That select statement does not return anything :( 
> 
> Could you add a bit of debugging to the script to see what the id is of 
the dataset_instance that has a None dataset?
> 
> if dataset_instance is None:
>log.debug( '%s %s has None dataset' % ( type( dataset_instance ), 
dataset_instance.id ) )
> 
> Thanks,
> --nate
> 
> > 
> > Thanks, 
> > Liisa 
> > 
> > 
> > 
> > 
> > 
> > From:Nate Coraor  
> > To:Liisa Koski  
> > Cc:galaxy-dev@lists.bx.psu.edu 
> > Date:01/10/2012 01:01 PM 
> > Subject:Re: [galaxy-dev] Error when running 
cleanup_datasets.py 
> > 
> > 
> > 
> > On Sep 24, 2012, at 10:41 AM, Liisa Koski wrote:
> > 
> > > Hello, 
> > > I am trying to run the cleanup scripts on my local installation but 
get stuck when trying to run the following: 
> > > 
> > > ./scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini -d 
10 -5 -r 
> > > 
> > > Deleting library dataset id  7225 
> > > Traceback (most recent call last): 
> > >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 524, 
in  
> > > if __name__ == "__main__": main() 
> > >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 124, 
in main 
> > > purge_folders( app, cutoff_time, options.remove_from_disk, 
info_only = options.info_only, force_retry = options.force_ret

Re: [galaxy-dev] Error when running cleanup_datasets.py

2012-11-08 Thread Liisa Koski
Hi Nate,
I'm back to trying to figure this out again as I am running out of disk 
space. I added the bit of code you suggested below, but I don't think it 
helped, I'm not so familiar with python.

I'm now running Galaxy Reports and it tells me that I have 7479 datasets 
that were deleted but have not yet been purged.

I get the error below when I run cleanup_datasets.py with both the -5 and 
-4 flag

Marking as deleted: LibraryDatasetDatasetAssociation id 6907 (for dataset 
id 51991)
Deleting dataset id 51991
Deleting library dataset id  7225
Traceback (most recent call last):
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 526, in 

if __name__ == "__main__": main()
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 124, in main
purge_folders( app, cutoff_time, options.remove_from_disk, info_only = 
options.info_only, force_retry = options.force_retry )
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 247, in 
purge_folders
_purge_folder( folder, app, remove_from_disk, info_only = info_only )
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 499, in 
_purge_folder
_purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only )
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 499, in 
_purge_folder
_purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only )
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder
_purge_dataset_instance( ldda, app, remove_from_disk, info_only = 
info_only ) #mark a DatasetInstance as deleted, clear associated files, 
and mark the Dataset as deleted if it is deletable
  File "scripts/cleanup_datasets/cleanup_datasets.py", line 373, in 
_purge_dataset_instance
log.debug( '%s %s has None dataset' % ( type( dataset_instance ), 
dataset_instance.id ) )
AttributeError: 'NoneType' object has no attribute 'id'

Thanks in advance for your help,
Liisa





From:   Nate Coraor 
To: Liisa Koski 
Cc: galaxy-dev@lists.bx.psu.edu
Date:   02/10/2012 10:50 AM
Subject:Re: [galaxy-dev] Error when running cleanup_datasets.py



On Oct 2, 2012, at 10:44 AM, Liisa Koski wrote:

> Hi Nate, 
> That select statement does not return anything :( 

Could you add a bit of debugging to the script to see what the id is of 
the dataset_instance that has a None dataset?

if dataset_instance is None:
log.debug( '%s %s has None dataset' % ( type( dataset_instance ), 
dataset_instance.id ) )

Thanks,
--nate

> 
> Thanks, 
> Liisa 
> 
> 
> 
> 
> 
> From:Nate Coraor  
> To:Liisa Koski  
> Cc:galaxy-dev@lists.bx.psu.edu 
> Date:01/10/2012 01:01 PM 
> Subject:Re: [galaxy-dev] Error when running cleanup_datasets.py 
> 
> 
> 
> On Sep 24, 2012, at 10:41 AM, Liisa Koski wrote:
> 
> > Hello, 
> > I am trying to run the cleanup scripts on my local installation but 
get stuck when trying to run the following: 
> > 
> > ./scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini -d 10 
-5 -r 
> > 
> > Deleting library dataset id  7225 
> > Traceback (most recent call last): 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 524, in 
 
> > if __name__ == "__main__": main() 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 124, in 
main 
> > purge_folders( app, cutoff_time, options.remove_from_disk, 
info_only = options.info_only, force_retry = options.force_retry ) 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 247, in 
purge_folders 
> > _purge_folder( folder, app, remove_from_disk, info_only = 
info_only ) 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder 
> > _purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only ) 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder 
> > _purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only ) 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 495, in 
_purge_folder 
> > _purge_dataset_instance( ldda, app, remove_from_disk, info_only = 
info_only ) #mark a DatasetInstance as deleted, clear associated files, 
and mark the Dataset as deleted if it is deletable 
> >   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 376, in 
_purge_dataset_instance 
> > ( dataset_instance.__class__.__name__, dataset_instance.id, 
dataset_instance.dataset.id ) 
> > AttributeError: 'NoneType' object has no attribute 'id' 
> 
> Hi Liisa,
> 
> It'd

[galaxy-dev] How to get a shedtool to run locally

2012-10-15 Thread Liisa Koski
Hello,
I've installed a tool to my local Galaxy installation via my local 
ToolShed. I would like to run this tool locally, and not have it submitted 
to the grid.

I have tried two ids in the universe_wsgi.ini [galaxy:tool_runners] 
section: the tool id directly from the tool.xml file, and the id from 
shed_tool_conf.xml (the same id found in integrated_tool_panel.xml):

tool = local:///
galaxy.server:9009/repos/user/tool/1.0 = local:///


My job still gets submitted to the grid.
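
One thing worth checking (an assumption, since the full id is not shown 
here): for a tool installed from a tool shed, the key in 
[galaxy:tool_runners] normally has to be the tool's complete guid, which 
usually also includes the tool id from the tool XML between the repository 
name and the version. The exact string can be read out of 
integrated_tool_panel.xml:

grep -n 'repos/user/tool' integrated_tool_panel.xml
# then use the id exactly as printed, e.g. (format illustrative only):
#   galaxy.server:9009/repos/user/tool/<tool id>/1.0 = local:///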

Any help would be much appreciated.

Thanks,
Liisa


Re: [galaxy-dev] Error when running cleanup_datasets.py

2012-10-02 Thread Liisa Koski
Hi Nate,
That select statement does not return anything :(

Thanks,
Liisa





From:   Nate Coraor 
To: Liisa Koski 
Cc: galaxy-dev@lists.bx.psu.edu
Date:   01/10/2012 01:01 PM
Subject:Re: [galaxy-dev] Error when running cleanup_datasets.py



On Sep 24, 2012, at 10:41 AM, Liisa Koski wrote:

> Hello, 
> I am trying to run the cleanup scripts on my local installation but get 
stuck when trying to run the following: 
> 
> ./scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini -d 10 
-5 -r 
> 
> Deleting library dataset id  7225 
> Traceback (most recent call last): 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 524, in 
 
> if __name__ == "__main__": main() 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 124, in 
main 
> purge_folders( app, cutoff_time, options.remove_from_disk, info_only 
= options.info_only, force_retry = options.force_retry ) 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 247, in 
purge_folders 
> _purge_folder( folder, app, remove_from_disk, info_only = info_only 
) 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder 
> _purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only ) 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder 
> _purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only ) 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 495, in 
_purge_folder 
> _purge_dataset_instance( ldda, app, remove_from_disk, info_only = 
info_only ) #mark a DatasetInstance as deleted, clear associated files, 
and mark the Dataset as deleted if it is deletable 
>   File "./scripts/cleanup_datasets/cleanup_datasets.py", line 376, in 
_purge_dataset_instance 
> ( dataset_instance.__class__.__name__, dataset_instance.id, 
dataset_instance.dataset.id ) 
> AttributeError: 'NoneType' object has no attribute 'id' 

Hi Liisa,

It'd appear that you have a library_dataset_dataset_association in your 
dataset that lacks an associated dataset.  Does 'select id from 
library_dataset_dataset_association where dataset_id is null' in your 
database return anything?

--nate

> 
> 
> Any help would be much appreciated. 
> 
> Thanks, 
> Liisa 

[galaxy-dev] Can't create custom visualization

2012-09-27 Thread Liisa Koski
Hello,
I'm trying to create a custom visualization in Trackster. I am doing the 
following steps: Visualization -> New Visualization -> Add a Custom Build.

I get directed to the url below:

http://galaxy.ca:8080/user/dbkeys?use_panels=True

But the web page says:

Server Error
An error occurred. See the error logs for more information. (Turn debug on 
to display exception reports here) 
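
Since the error page itself points at the debug setting, one quick way to 
get a real traceback instead of the generic message (option names as in the 
stock universe_wsgi.ini sample; revert them afterwards on a production 
server):

grep -nE '^(debug|use_interactive)' universe_wsgi.ini
# set both to True and restart; the full exception should then show up
# in the browser and in paster.log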

My paster.log file shows the following:

WSGI Variables
--
  application: 
  paste.cookies: (, 
'__utma=99541067.1404064366.1320847925.1320847925.1320857512.2; 
galaxysession=c6ca0ddb55be603a67ec94afb2c9a07cafdf91af0226f4689d096411ea1bc0c1004977df81ecca90;
 
galaxycommunitysession=eb142648ac45b770e95464ae1d51cc6457899dd48ca7f3e61a826ece0a2b5d2a65f4ac4aabb6e5e9;
 
toolshedgalaxyurl="http://martin.dnalandmarks.ca:8080/";; 
galaxyreportssession=c6ca0ddb55be603a922aa045b4afae662fca8487217f2ea25e5b488dcf5d52aefac5347681c8bf98')
  paste.expected_exceptions: []
  paste.httpexceptions: 
  paste.httpserver.thread_pool: 
  paste.parsed_querystring: ([('use_panels', 'True')], 'use_panels=True')
  paste.recursive.forward: 
  paste.recursive.include: 
  paste.recursive.include_app_iter: 
  paste.recursive.script_name: ''
  paste.throw_errors: True
  webob._parsed_query_vars: (MultiDict([('use_panels', 'True')]), 
'use_panels=True')
  wsgi process: 'Multithreaded'


Any help would be much appreciated.


Thanks,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] problem installing blast_datatypes manually

2012-09-26 Thread Liisa Koski
Thanks Peter,
It was in fact the sniffer type. I changed it to 
<sniffer type="galaxy.datatypes.blast:BlastXml"/>

I am using the July07 distribution. The reason I installed the datatypes 
manually is because when I did it via the directions with the toolshed all 
my workflows that used blast tools were broken. They could no longer find 
the blast tools. So I immediately removed the toolshed installation and 
tried to set up as before, with the blast wrappers back in the 
tools/ncbi_blast_plus directory. Now my workflows are functional again.

Thanks for your help,
Liisa





From:   Peter Cock 
To: "galaxy-dev@lists.bx.psu.edu" 
Cc: Liisa Koski 
Date:   26/09/2012 04:33 AM
Subject:Re: [galaxy-dev] problem installing blast_datatypes 
manually



On Tue, Sep 25, 2012 at 8:39 PM, Peter Cock  
wrote:
> On Tuesday, September 25, 2012, Liisa Koski wrote:
>>
>> Hello,
>> I followed the instructions below to manually install the 
blast_datatypes:
>>
>> Manual Installation
>> ===
>>
>> Normally you would install this via the Galaxy ToolShed, which would 
move
>> the provided blast.py file into a suitable location and process the
>> datatypes_conf.xml entry to be combined with your local configuration.
>>
>> However, if you really want to this should work for a manual install. 
Add
>> the following line to the datatypes_conf.xml file in the Galaxy main
>> folder:
>>
>>    <datatype extension="blastxml" type="galaxy.datatypes.blast:BlastXml" 
mimetype="application/xml" display_in_upload="true"/>
>>
>> Also create the file lib/galaxy/datatypes/blast.py by moving, copying 
or
>> linking the blast.py file provided in this tar-ball.  Finally add 
'import blast'
>> near
>> the start of file lib/galaxy/datatypes/registry.py (after the other 
import
>> lines).
>>
>> =
>>
>> I restarted my local Galaxy instance but still get this error.
>>
>> WARNING:galaxy.datatypes.registry:Error appending sniffer for datatype
>> 'galaxy.datatypes.xml:BlastXml' to sniff_order: 'module' object has no
>> attribute 'BlastXml'
>>
>>
>> Any help would be much appreciated.
>> Thanks,
>> Liisa
>
>
> The error message sounds like your XML file is using the old location of 
the
> BlastXml class (it used to be in an xml.py file, now it is in blast.py
> instead). Can you grep the XML file for Blast? (Use -i for case 
insensitive)
>
> Sadly right now our Galaxy server is offline (suspected disk failure), 
so I
> may not be able to double check what is on our machine. I'll try to have 
a
> look at work tomorrow though.

My guess is you have this, with an out of date sniffer line from when
BLAST+ was part of the main distribution:

<sniffer type="galaxy.datatypes.xml:BlastXml"/>
And you should have:

$ grep -i blast datatypes_conf.xml
<datatype extension="blastxml" type="galaxy.datatypes.blast:BlastXml" mimetype="application/xml" display_in_upload="true"/>
<sniffer type="galaxy.datatypes.blast:BlastXml"/>

Or, if you leave out the sniffer:

$ grep -i blast datatypes_conf.xml
<datatype extension="blastxml" type="galaxy.datatypes.blast:BlastXml" mimetype="application/xml" display_in_upload="true"/>

The sniffer is important to allow the user to upload BLAST XML files
and have them automatically recognised as such. I see that I had not
mentioned that in the tool's README file, an oversight I will fix in the
next upload to the tool shed:
https://bitbucket.org/peterjc/galaxy-central/changeset/5cefd5d5536ea9bc11021c4c1e0b8937175e4ba1
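
For anyone wondering what the sniffer actually does: it is simply a sniff() 
method on the datatype class that Galaxy calls on uploaded files when the 
format is set to auto-detect. An illustrative sketch (not the real blast.py 
code) of a BLAST XML sniffer:

# Illustrative sketch only -- not the actual galaxy.datatypes.blast:BlastXml.
# A sniffer returns True when the file looks like this datatype.
class BlastXmlSketch(object):
    file_ext = "blastxml"

    def sniff(self, filename):
        with open(filename) as handle:
            first = handle.readline()
            second = handle.readline()
        # BLAST XML output starts with an XML declaration followed by a
        # "<!DOCTYPE BlastOutput ..." line.
        return first.startswith("<?xml ") and "<!DOCTYPE BlastOutput" in second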


> (Out of interest, was there a reason you didn't use the automatic 
install
> from the ToolShed?)

I should probably have also checked - are you running a recent
version of Galaxy where the NCBI BLAST+ wrappers have been
removed from the core distribution?

Regards,

Peter

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] problem installing blast_datatypes manually

2012-09-25 Thread Liisa Koski
Hello,
I followed the instructions below to manually install the blast_datatypes:

Manual Installation
===

Normally you would install this via the Galaxy ToolShed, which would move
the provided blast.py file into a suitable location and process the
datatypes_conf.xml entry to be combined with your local configuration.

However, if you really want to, this should work for a manual install. Add
the following line to the datatypes_conf.xml file in the Galaxy main 
folder:

   <datatype extension="blastxml" type="galaxy.datatypes.blast:BlastXml" mimetype="application/xml" display_in_upload="true"/>

Also create the file lib/galaxy/datatypes/blast.py by moving, copying or 
linking
the blast.py file provided in this tar-ball.  Finally add 'import blast' 
near
the start of file lib/galaxy/datatypes/registry.py (after the other import
lines).

=

I restarted my local Galaxy instance but still get this error.

WARNING:galaxy.datatypes.registry:Error appending sniffer for datatype 
'galaxy.datatypes.xml:BlastXml' to sniff_order: 'module' object has no 
attribute 'BlastXml'


Any help would be much appreciated.
Thanks,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Can't edit Galaxy Workflow _ElementInterface instance has no attribute 'render'

2012-09-24 Thread Liisa Koski
Hello,
After updating to the Sept. 07 distribution I am having problems editing 
an existing workflow.


Server error
URL: 
http:galaxy_url/workflow/load_workflow?id=ba751ee0539fff04&_=1348501448807
Module paste.exceptions.errormiddleware:143 in __call__
>>  app_iter = self.application(environ, start_response)
Module paste.debug.prints:98 in __call__
>>  environ, self.app)
Module paste.wsgilib:539 in intercept_output
>>  app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__
>>  return self.application(environ, start_response)
Module paste.httpexceptions:632 in __call__
>>  return self.application(environ, start_response)
Module galaxy.web.framework.base:160 in __call__
>>  body = method( trans, **kwargs )
Module galaxy.web.framework:69 in decorator
>>  return simplejson.dumps( func( self, trans, *args, **kwargs ) )
Module galaxy.web.controllers.workflow:735 in load_workflow
>>  'tooltip': module.get_tooltip( static_path=url_for( '/static' ) ),
Module galaxy.workflow.modules:262 in get_tooltip
>>  return self.tool.help.render( static_path=static_path )
AttributeError: _ElementInterface instance has no attribute 'render'
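
The failing call assumes tool.help is a template object with a render() 
method, but here it is a bare ElementTree element (the _ElementInterface in 
the message). A guarded sketch of that tooltip lookup (not the upstream fix):

# Sketch of a defensive guard around the call in galaxy/workflow/modules.py:
# only render the tooltip when the tool's help object really is a template.
def get_tooltip(tool, static_path=""):
    help_obj = getattr(tool, "help", None)
    if help_obj is not None and hasattr(help_obj, "render"):
        return help_obj.render(static_path=static_path)
    return None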

Any help would be much appreciated.

Thanks in advance,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Error when running cleanup_datasets.py

2012-09-24 Thread Liisa Koski
Hello,
I am trying to run the cleanup scripts on my local installation but get 
stuck when trying to run the following:

./scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini -d 10 -5 
-r

Deleting library dataset id  7225
Traceback (most recent call last):
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 524, in 
<module>
if __name__ == "__main__": main()
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 124, in main
purge_folders( app, cutoff_time, options.remove_from_disk, info_only = 
options.info_only, force_retry = options.force_retry )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 247, in 
purge_folders
_purge_folder( folder, app, remove_from_disk, info_only = info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder
_purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 497, in 
_purge_folder
_purge_folder( sub_folder, app, remove_from_disk, info_only = 
info_only )
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 495, in 
_purge_folder
_purge_dataset_instance( ldda, app, remove_from_disk, info_only = 
info_only ) #mark a DatasetInstance as deleted, clear associated files, 
and mark the Dataset as deleted if it is deletable
  File "./scripts/cleanup_datasets/cleanup_datasets.py", line 376, in 
_purge_dataset_instance
( dataset_instance.__class__.__name__, dataset_instance.id, 
dataset_instance.dataset.id )
AttributeError: 'NoneType' object has no attribute 'id'


Any help would be much appreciated.

Thanks,
Liisa
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Error in db upgrade when updating to May11 distribution

2012-06-13 Thread Liisa Koski
Hello,
I get an error when trying to upgrade my mysql db with sh manage_db.sh 
upgrade from version 94->95 (see below).

Will this cause problems with my installation?




sh manage_db.sh upgrade
93 -> 94...

Migration script to create "handler" column in job table.

done
94 -> 95...

Migration script to create table for tracking history_dataset_association 
subsets.

(OperationalError) (1059, "Identifier name 
'ix_history_dataset_association_subset_history_dataset_association_id' is 
too long") u'CREATE INDEX 
ix_history_dataset_association_subset_history_dataset_association_id ON 
history_dataset_association_subset (history_dataset_association_id)' ()
done
95 -> 96...

Migration script to add column to openid table for provider.
Remove any OpenID entries with nonunique GenomeSpace Identifier

done
96 -> 97...

Migration script to add the ctx_rev column to the tool_shed_repository 
table.

done
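
MySQL caps identifier names at 64 characters, which is why that one CREATE 
INDEX statement failed while the rest of the upgrade ran through. If you want 
the index anyway, one possible workaround is to create it by hand under a 
shorter name; a sketch (the connection URL and the shorter index name are 
assumptions):

from sqlalchemy import create_engine, text

# Assumed MySQL URL -- use the database_connection value from universe_wsgi.ini.
engine = create_engine("mysql://galaxy:galaxy@localhost/galaxy")
with engine.begin() as conn:
    # Same table and column as the failed statement, just a shorter name.
    conn.execute(text(
        "CREATE INDEX ix_hdas_hda_id "
        "ON history_dataset_association_subset (history_dataset_association_id)"
    ))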

Thanks in advance,
Liisa___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Problems saving a cloned workflow

2012-05-03 Thread Liisa Koski
Hello,
I cloned a workflow on my local Galaxy installation, renamed it, made some 
edits and pressed save. It has been saving now for about 3 hours. It only 
has 12 steps.

Any suggestions?

Thanks in advance,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Error running set_dataset_sizes.py

2012-04-19 Thread Liisa Koski
Hello,
I'm seeing some discrepancies in total user usage versus what my histories 
actually total so I wanted to run set_dataset_sizes.py  and 
set_user_disk_usage.py 

I am getting the following error.

 ./set_dataset_sizes.py
Loading Galaxy model...
Processing 77915 datasets...
Completed 0%
Traceback (most recent call last):
  File "./set_dataset_sizes.py", line 43, in 
dataset.set_total_size()
  File "lib/galaxy/model/__init__.py", line 703, in set_total_size
if self.object_store.exists(self, extra_dir=self._extra_files_path or 
"dataset_%d_files" % self.id, dir_only=True):
AttributeError: 'NoneType' object has no attribute 'exists'


Any help would be much appreciated.

Thanks,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Can't clone workflows

2012-03-21 Thread Liisa Koski
Hi Dannon,
Yes...the tags have piled up. I will apply this fix and try to remove some 
of the existing tags.
Thanks for your suggestion.

Liisa




From:   Dannon Baker 
To: Liisa Koski 
Cc: galaxy-dev@lists.bx.psu.edu
Date:   21/03/2012 09:08 AM
Subject:Re: [galaxy-dev] Can't clone workflows



Liisa,

I'm not able to reproduce this locally with a fresh galaxy-dist.  Is there 
anything unique about your workflows or configuration here?  And, this 
might be a long shot, but do you frequently use tags with your workflows? 
There was an issue that I've fixed with this recently that would cause a 
significant hang when cloning.

-Dannon


On Feb 29, 2012, at 4:03 PM, Liisa Koski wrote:

> Hello, 
> I have lost the ability to clone workflows in my local installation of 
Galaxy (the latest galaxy-dist). When I try...it just hangs...and hangs... 

> 
> Any help would be much appreciated. 
> 
> Thanks, 
> Liisa 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Can't clone workflows

2012-02-29 Thread Liisa Koski
Hello,
I have lost the ability to clone workflows in my local installation of 
Galaxy (the latest galaxy-dist). When I try...it just hangs...and hangs...

Any help would be much appreciated.

Thanks,
Liisa


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Error Setting BAM Metadata

2012-02-03 Thread Liisa Koski
Yes it is...and my Galaxy is up to date.

Thanks,
Liisa




From:
Nate Coraor 
To:
Liisa Koski 
Cc:
galaxy-dev@lists.bx.psu.edu
Date:
01/30/2012 03:07 PM
Subject:
Re: [galaxy-dev] Error Setting BAM Metadata



On Jan 25, 2012, at 2:43 PM, Liisa Koski wrote:

> Hello, 
> I am trying to upload BAM files (by pasting a URL) to my history(or 
DataLibrary) and get the following error. These are bam files which I had 
previously uploaded with no problems. 
> 
> Traceback (most recent call last):
>  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/jobs/runners/local.py", 
line 126, in run_job
>job_wrapper.finish( stdout, stderr )
>  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/jobs/__init__.py", line 
618, in finish
>dataset.set_meta( overwrite = False )
>  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/model/__init__.py", line 
874, in set_meta
>return self.datatype.set_meta( self, **kwd )
>  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/datatypes/binary.py", 
line 179, in set_meta
>raise Exception, "Error Setting BAM Metadata: %s" % stderr
> Exception: Error Setting BAM Metadata: [bam_header_read] EOF marker is 
absent. The input is probably truncated.
> [bam_header_read] invalid BAM binary header (this is not a BAM file) 
> 
> I ran bamtools on the unix command line to see if there was anything 
wrong with the file(s) but nothing. I tried uploading different bam files 
from other projects and get the same error. 
> 
> I did do an update to the latest release yesterday...if that helps? 

Hi Liisa,

Is this a regular upload via a browser?

--nate

> 
> Thanks in advance, 
> Liisa 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Error executing tool: 'fasta'

2012-01-27 Thread Liisa Koski
Hi,
I have tried running (on my local instance) a number of tools 
(fasta_to_tabular, tandem_repeat_finder, others) on my fasta file but keep 
getting the error "Error executing tool: 'fasta'" after file execution. In 
the log file I see this:

10.1.1.119 - - [27/Jan/2012:15:47:15 -0400] "GET 
/tool_runner?tool_id=fasta2tab HTTP/1.1" 200 - "
http://domain:8080/root/tool_menu" "Mozilla/5.0 (Windows NT 5.2; WOW64; 
rv:6.0) Gecko/20100101 Firefox/6.0"
galaxy.tools ERROR 2012-01-27 15:47:17,647 Exception caught while 
attempting tool execution:
Traceback (most recent call last):
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 1184, in 
handle_input
_, out_data = self.execute( trans, incoming=params, history=history )
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/__init__.py", line 1503, in 
execute
return self.tool_action.execute( self, trans, incoming=incoming, 
set_output_hid=set_output_hid, history=history, **kwargs )
  File "/Galaxy/galaxy_dist/lib/galaxy/tools/actions/__init__.py", line 
199, in execute
build_fasta_dataset = trans.app.model.HistoryDatasetAssociation.get( 
custom_build_dict[ 'fasta' ] )
KeyError: 'fasta'

The fasta file is ok because I have a development installation and the 
tools work fine there. So there is something wrong with my production 
installation.
I did a diff between the files galaxy_dist/lib/galaxy/tools/__init__.py 
and lib/galaxy/tools/actions/__init__.py but they are the same.
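
The KeyError itself means that one of the custom builds stored for the user 
has no 'fasta' entry in its dict (perhaps a build defined only by a len file), 
which points at the stored user data rather than the code. A guarded sketch of 
the failing lookup (not the upstream code):

# Sketch of a guard around the failing line in lib/galaxy/tools/actions/__init__.py:
# only dereference the 'fasta' key when the selected custom build defines one.
def get_build_fasta_dataset(model, custom_build_dict):
    fasta_id = custom_build_dict.get("fasta")
    if fasta_id is None:
        return None  # build has no FASTA dataset attached
    return model.HistoryDatasetAssociation.get(fasta_id)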

Any help would be much appreciated.

Thanks,
Liisa___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Error Setting BAM Metadata

2012-01-25 Thread Liisa Koski
Hello,
I am trying to upload BAM files (by pasting a URL) to my history (or 
Data Library) and get the following error. These are BAM files which I had 
previously uploaded with no problems.

Traceback (most recent call last):
  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/jobs/runners/local.py", 
line 126, in run_job
job_wrapper.finish( stdout, stderr )
  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/jobs/__init__.py", line 
618, in finish
dataset.set_meta( overwrite = False )
  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/model/__init__.py", line 
874, in set_meta
return self.datatype.set_meta( self, **kwd )
  File "/doolittle/Galaxy/galaxy_dist/lib/galaxy/datatypes/binary.py", 
line 179, in set_meta
raise Exception, "Error Setting BAM Metadata: %s" % stderr
Exception: Error Setting BAM Metadata: [bam_header_read] EOF marker is 
absent. The input is probably truncated.
[bam_header_read] invalid BAM binary header (this is not a BAM file)

I ran bamtools on the unix command line to see if there was anything wrong 
with the file(s) but nothing. I tried uploading different bam files from 
other projects and get the same error. 

I did do an update to the latest release yesterday...if that helps?
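
The "EOF marker is absent" message comes from samtools: a BGZF-compressed BAM 
normally ends with a fixed 28-byte EOF block, and a truncated transfer usually 
loses it. A minimal sketch for checking the file Galaxy ended up with (the 
path is a placeholder):

# Sketch: check for the 28-byte BGZF EOF block at the end of a BAM file.
BGZF_EOF = (b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42\x43"
            b"\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00")

def has_bgzf_eof(path):
    with open(path, "rb") as handle:
        handle.seek(-28, 2)  # 28 bytes before the end of the file
        return handle.read(28) == BGZF_EOF

print(has_bgzf_eof("/path/to/dataset.bam"))  # placeholder path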

Thanks in advance,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Problem uploading files from filesystem paths

2011-12-20 Thread Liisa Koski
Hi,
I'm trying to upload data to Data Libraries from filesystem paths as 
Admin. I get the following error. 
Any ideas?

Thanks in advance,
Liisa


Traceback (most recent call last): File 
"/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 394, in 
__main__() File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", 
line 386, in __main__ add_file( dataset, registry, js 
Job Standard Error 
Traceback (most recent call last):
  File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 394, 
in <module>
__main__()
  File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 386, 
in __main__
add_file( dataset, registry, json_file, output_path )
  File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 300, 
in add_file
if datatype.dataset_content_needs_grooming( dataset.path ):
  File "/data/Galaxy/galaxy-dist/lib/galaxy/datatypes/binary.py", line 79, 
in dataset_content_needs_grooming
version = self._get_samtools_version()
  File "/data/Galaxy/galaxy-dist/lib/galaxy/datatypes/binary.py", line 63, 
in _get_samtools_version
output = subprocess.Popen( [ 'samtools' ], stderr=subprocess.PIPE, 
stdout=subprocess.PIPE ).communicate()[1]
  File "/usr/lib64/python2.6/subprocess.py", line 633, in __init__
errread, errwrite)
  File "/usr/lib64/python2.6/subprocess.py", line 1139, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
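
The permission error is raised while upload.py spawns `samtools` with no 
arguments to read its version, so the Galaxy user most likely cannot execute 
the samtools binary it finds on its PATH. A minimal sketch for reproducing 
just that step as the Galaxy user:

# Sketch: reproduce what binary.py does -- spawn samtools with no arguments --
# to confirm the Galaxy user can actually execute it.
import subprocess

try:
    proc = subprocess.Popen(["samtools"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.communicate()
    print("samtools ran (exit code %s)" % proc.returncode)
except OSError as exc:
    # e.g. not on PATH for this user, or the executable bit is missing
    print("cannot execute samtools: %s" % exc)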


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Divide FASTQ file into paired and unpaired reads (version 0.0.4) - naming suffix not recognized

2011-12-12 Thread Liisa Koski
Hi Peter,
Thanks! The fix worked with the new Illumina format. 
Your help is much appreciated :)

Cheers,
Liisa

Liisa Koski 
Bioinformatics Programmer




From:
Peter Cock 
To:
Liisa Koski 
Cc:
galaxy-dev 
Date:
2011-12-12 03:47
Subject:
Re: [galaxy-dev] Divide FASTQ file into paired and unpaired reads (version 
0.0.4) - naming suffix not recognized





On Friday, December 9, 2011, Peter Cock  wrote:
> On Fri, Dec 9, 2011 at 5:05 PM, Liisa Koski  wrote:
>>
>> Thanks Peter! That would be great! Let me know when you want me to test 
it :)
>> Cheers,
>> Liisa
>
> The revised code is here on bitbucket (on the branch "tools"), you'll
> need fastq_paired_unpaired.py and fastq_paired_unpaired.xml v0.0.5:

Try this (looks like I had the wrong link on Friday), sorry:

https://bitbucket.org/peterjc/galaxy-central/src/a25f7920a1e5/tools/fastq

Peter 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Divide FASTQ file into paired and unpaired reads (version 0.0.4) - naming suffix not recognized

2011-12-09 Thread Liisa Koski
Thanks Peter! That would be great! Let me know when you want me to test it 
:)
Cheers,
Liisa

Liisa Koski 
Bioinformatics Programmer
Phone: 450-358-2621 x104, E-Mail: liisa.ko...@dnalandmarks.ca
Postal Address: DNA LandMarks Inc, St-jean-sur-Richelieu, J3B 6X3 Québec 
CANADA
DNA LandMarks - une compagnie de BASF Plant Science / a BASF Plant Science 
company



From:
Peter Cock 
To:
Liisa Koski 
Cc:
galaxy-dev 
Date:
2011-12-09 11:55
Subject:
Re: [galaxy-dev] Divide FASTQ file into paired and unpaired reads (version 
0.0.4) - naming suffix not recognized



On Fri, Dec 9, 2011 at 4:49 PM, Liisa Koski  
wrote:
> Hi ,
> Looks like the tool Divide FASTQ file into paired and unpaired reads is 
not
> recognizing my paired-end naming suffix
>
> @HWI-ST916:79:D04M5ACXX:1:1101:1:100326 1:N:0:TGNCCA
> @HWI-ST916:79:D04M5ACXX:1:1101:1:100326 2:N:0:TGNCCA
>
> Is there a patch for this or another tool/way I can divide this fastq 
file?
>
> Thanks in advance,
> Liisa

That's a new Illumina FASTQ file isn't it, where they went and changed
the read naming? That annoyed quite a few people...

Presumably you're talking about my tool of that name on the Galaxy
Tool Shed? I can take a look at this now if you agree to test it ;)

Peter
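
For reference, the two naming styles look quite different to a parser; a 
minimal sketch (not Peter's tool) of pulling the read number out of either 
form:

# Sketch: extract (template_name, read_number) from an old-style "@name/1"
# FASTQ title or a new CASAVA 1.8+ style "@name 1:N:0:INDEX" title.
def split_pair_name(title):
    title = title.lstrip("@").rstrip("\n")
    if " " in title:                       # new Illumina style
        name, description = title.split(None, 1)
        return name, int(description.split(":", 1)[0])
    if "/" in title:                       # old /1, /2 style
        name, number = title.rsplit("/", 1)
        return name, int(number)
    return title, None                     # no recognisable pair suffix

print(split_pair_name("@HWI-ST916:79:D04M5ACXX:1:1101:1:100326 1:N:0:TGNCCA"))
# -> ('HWI-ST916:79:D04M5ACXX:1:1101:1:100326', 1)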

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Divide FASTQ file into paired and unpaired reads (version 0.0.4) - naming suffix not recognized

2011-12-09 Thread Liisa Koski
Hi ,
Looks like the tool Divide FASTQ file into paired and unpaired reads is 
not recognizing my paired-end naming suffix

@HWI-ST916:79:D04M5ACXX:1:1101:1:100326 1:N:0:TGNCCA
@HWI-ST916:79:D04M5ACXX:1:1101:1:100326 2:N:0:TGNCCA

Is there a patch for this or another tool/way I can divide this fastq 
file?

Thanks in advance,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Can't view data files from published histories, only imported histories

2011-12-06 Thread Liisa Koski
Thanks Jeremy, that changeset fixed the issue :)

Cheers,
Liisa




From:
Jeremy Goecks 
To:
Liisa Koski 
Cc:

Date:
2011-11-30 18:07
Subject:
Re: [galaxy-dev] Can't view data files from published histories, only 
imported histories



Liisa,

Updating FastQC won't solve this issue. You'll need to manually update 
galaxy-dist with this changeset to see the graphs:

https://bitbucket.org/galaxy/galaxy-central/changeset/c8493a61bbea

Otherwise, you can wait until the next galaxy-dist to get this changeset.

Best,
J.



On Nov 30, 2011, at 2:39 PM, Liisa Koski wrote:

Hi Jeremy, 
I updated to the latest galaxy-dist release and can now see the boxplots 
(png) files from published histories. I can see the FASTQC (html) reports 
too but the individual graphs within the report are missing. I will update 
fastqc itself and see if that makes a difference. 

Thanks, 
Liisa 

Liisa Koski 
Bioinformatics Programmer
Phone: 518-309-3079, E-Mail: liisa.ko...@dnalandmarks.ca
Postal Address: DNA LandMarks Inc, St-jean-sur-Richelieu, J3B 6X3 Québec 
CANADA 
DNA LandMarks - une compagnie de BASF Plant Science / a BASF Plant Science 
company 


From: 
Jeremy Goecks  
To: 
Liisa Koski  
Cc: 
 
Date: 
2011-11-28 14:46 
Subject: 
Re: [galaxy-dev] Can't view data files from published histories, only 
imported histories




Liisa, 

I'm not able to reproduce your issue on our public server. For instance, 
here's a history that includes both FastQC output and a boxplot; both 
display correctly: 

http://main.g2.bx.psu.edu/u/jeremy/h/unnamed-history-2 

(Note that I just committed a fix to galaxy-central so that the images in 
the FastQC output show up correctly.) 

Some questions that might help us figure out the problem: 
*can you reproduce your problem on a public Galaxy instance, such as main 
or test? 
*is your Galaxy instance up to date? 
*is there something unusual about the datasets that you're using, e.g. are 
they imported in some non-standard way? 

J. 

On Nov 28, 2011, at 1:35 PM, Liisa Koski wrote: 

Hi Jeremy, 
I'm trying to view boxplots (png) and FastQC (html) output. I  can view 
other output types like tabular in the published histories, but not png or 
html. 

Thanks, 
Liisa 

Liisa Koski 
Bioinformatics Programmer
Phone: 518-309-3079, E-Mail: liisa.ko...@dnalandmarks.ca
Postal Address: DNA LandMarks Inc, St-jean-sur-Richelieu, J3B 6X3 Québec 
CANADA 
DNA LandMarks - une compagnie de BASF Plant Science / a BASF Plant Science 
company 

From: 
Jeremy Goecks  
To: 
Liisa Koski  
Cc: 
 
Date: 
2011-11-28 13:23 
Subject: 
Re: [galaxy-dev] Can't view data files

Re: [galaxy-dev] Can't view data files from published histories, only imported histories

2011-11-30 Thread Liisa Koski
Hi Jeremy,
I updated to the latest galaxy-dist release and can now see the boxplots 
(png) files from published histories. I can see the FASTQC (html) reports 
too but the individual graphs within the report are missing. I will update 
fastqc itself and see if that makes a difference.

Thanks,
Liisa

Liisa Koski 
Bioinformatics Programmer
Phone: 518-309-3079, E-Mail: liisa.ko...@dnalandmarks.ca
Postal Address: DNA LandMarks Inc, St-jean-sur-Richelieu, J3B 6X3 Québec 
CANADA
DNA LandMarks - une compagnie de BASF Plant Science / a BASF Plant Science 
company



From:
Jeremy Goecks 
To:
Liisa Koski 
Cc:

Date:
2011-11-28 14:46
Subject:
Re: [galaxy-dev] Can't view data files from published histories, only 
imported histories



Liisa,

I'm not able to reproduce your issue on our public server. For instance, 
here's a history that includes both FastQC output and a boxplot; both 
display correctly:

http://main.g2.bx.psu.edu/u/jeremy/h/unnamed-history-2

(Note that I just committed a fix to galaxy-central so that the images in 
the FastQC output show up correctly.)

Some questions that might help us figure out the problem:
*can you reproduce your problem on a public Galaxy instance, such as main 
or test?
*is your Galaxy instance up to date?
*is there something unusual about the datasets that you're using, e.g. are 
they imported in some non-standard way?

J.

On Nov 28, 2011, at 1:35 PM, Liisa Koski wrote:

Hi Jeremy, 
I'm trying to view boxplots (png) and FastQC (html) output. I  can view 
other output types like tabular in the published histories, but not png or 
html. 

Thanks, 
Liisa 

Liisa Koski 
Bioinformatics Programmer
Phone: 518-309-3079, E-Mail: liisa.ko...@dnalandmarks.ca
Postal Address: DNA LandMarks Inc, St-jean-sur-Richelieu, J3B 6X3 Québec 
CANADA 
DNA LandMarks - une compagnie de BASF Plant Science / a BASF Plant Science 
company 


From: 
Jeremy Goecks  
To: 
Liisa Koski  
Cc: 
 
Date: 
2011-11-28 13:23 
Subject: 
Re: [galaxy-dev] Can't view data files from published histories, only 
imported histories




Liisa, 

This functionality works fine in some instances, e.g. for this history: 

http://main.g2.bx.psu.edu/u/cartman/h/repeats 

I'd guess that it's related to the particular dataset type. What type of 
dataset are you trying to view when you see this error? Can you view any 
datasets from the history? 

Thanks, 
J. 

On Nov 28, 2011, at 1:11 PM, Liisa Koski wrote: 

Hi, 
I found a weird bug. I am trying to view data files by clicking on the 
'eye' icon from a published history on my local galaxy installation. When 
I click on th

Re: [galaxy-dev] Lost my Data Libraries when I updated to the lastest galaxy-dist release?

2011-11-30 Thread Liisa Koski
Sorry! They are back! I was looking at the wrong installation :(

Thanks,
Liisa




From:
Greg Von Kuster 
To:
Liisa Koski 
Cc:
galaxy-dev 
Date:
2011-11-30 13:45
Subject:
Re: [galaxy-dev] Lost my Data Libraries when I updated to the lastest 
galaxy-dist release?



Lisa,

What release did you update from?  Did you have any access restrictions on 
the data libraries?  If so, are you logged in as a user that has those 
access privileges?  Simply updating you Galaxy code base should not have 
caused this issue for you.


On Nov 30, 2011, at 1:38 PM, Liisa Koski wrote:

Hi, 
I lost my Data Libraries when I updated to the latest galaxy-dist release 
:( What would be the best way to go about getting them back? 

Thanks in advance, 
Liisa 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/

Greg Von Kuster
Galaxy Development Team
g...@bx.psu.edu



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Lost my Data Libraries when I updated to the lastest galaxy-dist release?

2011-11-30 Thread Liisa Koski
Hi,
I lost my Data Libraries when I updated to the latest galaxy-dist release 
:( What would be the best way to go about getting them back?

Thanks in advance,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Can't view data files from published histories, only imported histories

2011-11-28 Thread Liisa Koski
Hi Jeremy,
I'm trying to view boxplots (png) and FastQC (html) output. I  can view 
other output types like tabular in the published histories, but not png or 
html.

Thanks,
Liisa

Liisa Koski 
Bioinformatics Programmer
Phone: 518-309-3079, E-Mail: liisa.ko...@dnalandmarks.ca
Postal Address: DNA LandMarks Inc, St-jean-sur-Richelieu, J3B 6X3 Québec 
CANADA
DNA LandMarks - une compagnie de BASF Plant Science / a BASF Plant Science 
company



From:
Jeremy Goecks 
To:
Liisa Koski 
Cc:

Date:
2011-11-28 13:23
Subject:
Re: [galaxy-dev] Can't view data files from published histories, only 
imported histories



Liisa,

This functionality works fine in some instances, e.g. for this history:

http://main.g2.bx.psu.edu/u/cartman/h/repeats

I'd guess that it's related to the particular dataset type. What type of 
dataset are you trying to view when you see this error? Can you view any 
datasets from the history?

Thanks,
J.

On Nov 28, 2011, at 1:11 PM, Liisa Koski wrote:

Hi, 
I found a weird bug. I am trying to view data files by clicking on the 
'eye' icon from a published history on my local galaxy installation. When 
I click on the eye I get a 'Server Error' and in the log file I get the 
following. 

Error - : global name 'data' is not defined 
URL: http://domain:8080/u/user/d/8bdb720fee635874 
File 
'galaxy_dist/eggs/Paste-1.6-py2.6.egg/paste/exceptions/errormiddleware.py', 
line 143 in __call__ 
  app_iter = self.application(environ, start_response) 
File 'galaxy_dist/eggs/Paste-1.6-py2.6.egg/paste/recursive.py', line 80 in 
__call__ 
  return self.application(environ, start_response) 
File 'galaxy_dist/eggs/Paste-1.6-py2.6.egg/paste/httpexceptions.py', line 
632 in __call__ 
  return self.application(environ, start_response) 
File 'galaxy_dist/lib/galaxy/web/framework/base.py', line 160 in __call__ 
  body = method( trans, **kwargs ) 
File 'galaxy_dist/lib/galaxy/web/controllers/dataset.py', line 693 in 
display_by_username_and_slug 
  trans.response.set_content_type( data.get_mime() ) 
NameError: global name 'data' is not defined 

If I import the history into my own history pane I can click on the eye 
and see the data with no errors. 

Any ideas how I can see the data from the published histories? 

Thanks, 
Liisa 

Liisa Koski 
Bioinformatics Programmer
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Can't view data files from published histories, only imported histories

2011-11-28 Thread Liisa Koski
Hi,
I found a weird bug. I am trying to view data files by clicking on the 
'eye' icon from a published history on my local galaxy installation. When 
I click on the eye I get a 'Server Error' and in the log file I get the 
following.

Error - : global name 'data' is not defined
URL: http://domain:8080/u/user/d/8bdb720fee635874
File 
'galaxy_dist/eggs/Paste-1.6-py2.6.egg/paste/exceptions/errormiddleware.py', 
line 143 in __call__
  app_iter = self.application(environ, start_response)
File 'galaxy_dist/eggs/Paste-1.6-py2.6.egg/paste/recursive.py', line 80 in 
__call__
  return self.application(environ, start_response)
File 'galaxy_dist/eggs/Paste-1.6-py2.6.egg/paste/httpexceptions.py', line 
632 in __call__
  return self.application(environ, start_response)
File 'galaxy_dist/lib/galaxy/web/framework/base.py', line 160 in __call__
  body = method( trans, **kwargs )
File 'galaxy_dist/lib/galaxy/web/controllers/dataset.py', line 693 in 
display_by_username_and_slug
  trans.response.set_content_type( data.get_mime() )
NameError: global name 'data' is not defined

If I import the history into my own history pane I can click on the eye 
and see the data with no errors.

Any ideas how I can see the data from the published histories?

Thanks,
Liisa

Liisa Koski 
Bioinformatics Programmer
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Compatible MACS version?

2011-06-03 Thread Liisa Koski
Hi,
I'm trying to use macs in my local Galaxy instance but am having 
difficulties. I have macs 1.4.0rc2 20110214 (Valentine) installed. I get 
macs: error: no such option: --lambdaset
I checked documentation for Galaxy Dependencies but no version is 
mentioned for macs (
https://bitbucket.org/galaxy/galaxy-central/wiki/ToolDependencies).

Thanks in advance for the help,
Liisa
 ___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Import Library Datasets into Histories - automatically imports into current AND new history - Bug?

2011-05-31 Thread Liisa Koski
Hello,
I noticed a change in the newest version. When importing Library Datasets 
into Histories...if you enter a New History name for the destination, 
Galaxy will import into the new AND the current history. So it is getting 
imported into two histories. Is this a bug? It would really be nice to 
only import a dataset into a new history like before.

Thanks,
Liisa
 ___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] BioMart -> Galaxy returns html

2011-05-18 Thread Liisa Koski
 Hi,
I'm trying to grab data from my local custom BioMart and export it to my 
local Galaxy installation.  I did this by changing the address in the 
biomart.xml file

<inputs action="http://mydomain/biomart/martview" check_values="false" 
method="get" target="_top">
go to BioMart Central $GALAXY_URL




The file it returns is html. When I change the data type to tabular and 
click on the eye I see the following...



Request Error (invalid_request)

Your request could not be processed. Request could not be handled

[...the rest of the returned page is a stripped HTML error form that posts to 
http://domain.com/scripts/proxy_exception_2.pl and asks the user to seek 
assistance from the internet support group.]



 

Any help would be much appreciated.
Thanks,
Liisa___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Can I run workflow on certain nodes of the cluster?

2011-05-09 Thread Liisa Koski
Is it possible to run a specific workflow only on certain nodes of the 
cluster? Either using the API or by setting something in the config files?

Thanks,
Liisa
 ___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Workflow steps stuck in job queue - since cleanup_datasets.py

2011-05-05 Thread Liisa Koski
Hi,
Yesterday I ran the cleanup_datasets.py scripts as follows..

 Deleting Userless Histories
python cleanup_datasets.py universe_wsgi.ini -d 10 -1

Purging Deleted Histories
python cleanup_datasets.py universe_wsgi.ini -d 10 -2 -r

Purging Deleted Datasets
python cleanup_datasets.py universe_wsgi.ini -d 10 -3 -r

Purging Library Folders
python cleanup_datasets.py universe_wsgi.ini -d 10 -5 -r

Purging Libraries
python cleanup_datasets.py universe_wsgi.ini -d 10 -4 -r

Deleting Datasets / Purging Dataset Instances
python cleanup_datasets.py universe_wsgi.ini -d 10 -6 -r 

This morning I noticed a number of workflows were either stuck at a 
certain step (i.e. job running) or the step was grey (waiting in queue) but 
our cluster has free nodes. If I start a new workflow...it completes 
fine...just the 19 histories that were running yesterday are stuck. Did I 
do something wrong with the cleanup? Is there a way to restart these stuck 
histories without having to restart the entire workflow? 

Thanks in advance,
Liisa

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] sqlalchemy Timeout Error

2011-04-21 Thread Liisa Koski
Hi,
We are running many NGS workflows at the same time on our local instance 
of Galaxy. They are crashing with the following error

 Error - : QueuePool limit of size 5 
overflow 10 reached, connection timed out, timeout 30

In  the universe_wsgi.ini we made the following adjustments:

# If the server logs errors about not having enough database pool 
connections,
# you will want to increase these values, or consider running more Galaxy
# processes.
database_engine_option_pool_size = 50   # this used to be 5
database_engine_option_max_overflow = 100 # this used to be 10


Those numbers were pulled out of a hat so I wanted to make sure what we 
were doing was correct. Is there a limit on the values? It doesn't appear 
to be crashing anymore but I still want to make sure.
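
For what it's worth, those two options are passed straight through to 
SQLAlchemy's connection pool, so there is no Galaxy-specific ceiling; the 
practical limit is how many connections your database server accepts 
(max_connections in PostgreSQL or MySQL, for example). A sketch of what the 
settings translate to (the URL is a placeholder):

# Sketch: the universe_wsgi.ini options map onto SQLAlchemy pool arguments.
# pool_size connections stay open; up to max_overflow extra connections may be
# opened under load, and callers that still cannot get one wait pool_timeout
# seconds before raising the "QueuePool limit ... reached" error seen above.
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://galaxy:galaxy@localhost/galaxy",  # placeholder URL
    pool_size=50,
    max_overflow=100,
    pool_timeout=30,
)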

Thanks,
Liisa___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] update problem...

2011-04-11 Thread Liisa Koski
Hi, I just tried to update my local installation of Galaxy but accidentally 
hit Ctrl-C when it asked me a question

 added 285 changesets with 649 changes to 354 files
 local changed static/scripts/packed/jquery.jstore.js which remote deleted
use (c)hanged version or (d)elete? interrupted!

When I tried to do the update again...it says 'no changes found'

But...after restarting my instance...I see that the changes did not get 
implemented.

Is there a way to force this?

Thanks in advance,
Liisa___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Can't save BAM file from Galaxy

2011-03-22 Thread Liisa Koski
Thanks Nate! I set 'debug = False' and now I have no problem downloading my 
BAM/SAM files.

Cheers,
Liisa
 



From:
Nate Coraor 
To:
Liisa Koski 
Cc:
galaxy-...@bx.psu.edu
Date:
2011-03-22 12:06
Subject:
Re: [galaxy-dev] Can't save BAM file from Galaxy



Liisa Koski wrote:
> Hello,
> I have a local instance of galaxy and after successfully running an NGS 
> analysis I am trying to save my BAM file to my local machine. When I 
click 
> on the save icon my Galaxy instance crashes with the following error. 
This 
> also happens when I try to save the SAM file. It does not happen when I 
> try to save a txt file. 
> 
> python: ./Modules/cStringIO.c:419: O_cwrite: Assertion `oself->pos + l < 

> 2147483647' failed.
> ./run.sh: line 48: 19187 Aborted python 
> ./scripts/paster.py serve universe_wsgi.ini $@
> 
> Something to do with base positions?

Hi Liisa,

I'm not completely certain, but it may just be the debugging modules
that are enabled by default trying to load the entire bam/sam file into
memory before sending it.  Try restarting your Galaxy server after
setting 'debug = False' in universe_wsgi.ini and see if this makes a
difference.
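
For context, the number in that assertion is the 2 GiB cap on a cStringIO 
buffer, which is presumably what the debug middleware hits when it buffers a 
whole BAM/SAM download in memory; small text files stay under it:

# The constant in the failing assertion is the signed 32-bit limit, i.e. 2 GiB.
print(2147483647 == 2**31 - 1)                   # True
print("%.1f GiB" % (2147483647 / 1024.0 ** 3))   # ~2.0 GiB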

--nate

> 
> Thanks in advance for any help,
> Liisa
> 

> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>   http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Can't save BAM file from Galaxy

2011-03-22 Thread Liisa Koski
Hello,
I have a local instance of galaxy and after successfully running an NGS 
analysis I am trying to save my BAM file to my local machine. When I click 
on the save icon my Galaxy instance crashes with the following error. This 
also happens when I try to save the SAM file. It does not happen when I 
try to save a txt file. 

python: ./Modules/cStringIO.c:419: O_cwrite: Assertion `oself->pos + l < 
2147483647' failed.
./run.sh: line 48: 19187 Aborted python 
./scripts/paster.py serve universe_wsgi.ini $@

Something to do with base positions?

Thanks in advance for any help,
Liisa
 ___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Send data to GBrowse

2011-02-11 Thread Liisa Koski
I have figured out how to send data from my local Galaxy to my local 
GBrowse. Thanks for the help! 

Now I would also like to go in the other direction and send data from 
GBrowse to Galaxy.

galaxy incoming = http://localhost/cgi-bin/gbrowse

When I export data to Galaxy I get the main page of galaxy but do not see 
any data.
The url in my browser is 
http://localhost:8080/?URL=http://localhost/cgi-bin/gbrowse

Any ideas as to why I can't see the actual data? Shouldn't the url contain 
ref;start;end;type?

Thanks for your help,
Liisa
 



From:
Nicki Gray 
To:
Liisa Koski 
Cc:
"galaxy-dev@lists.bx.psu.edu" 
Date:
2011-02-10 11:40
Subject:
Re: [galaxy-dev] Send data to GBrowse




Hi Liisa

When setting this up on our own local instance to get bigwig files to 
display in Gbrowse we edited:

1. universe_wsgi.ini
2. tool-data/shared/gbrowse/gbrowse_build_sites.txt
3.  datatypes_conf.xml

so that it includes



4. and in display_applications/gbrowse/ make sure you have the relevant 
xml files

eg gbrowse_wig.xml, gbrowse_bigwig.xml

Nicki Gray
MRC Molecular Haematology Unit


On 7 Feb 2011, at 19:03, Liisa Koski wrote:

Hi, 
I have just installed Galaxy and am trying to configure it to allow 
display of data on my local installation of GBrowse. 

I have edited tool-data/shared/gbrowse/gbrowse_build_sites.txt (species 
http://domain.ca/cgi-bin/gbrowse/species   species) 
  
As well as universe_wsgi.ini (gbrowse_display_sites = species) 

However I do not see a link in history items after I stop and start the 
server. 

Is there something else I'm missing? 

Thanks in advance for your help, 
Liisa

___
To manage your subscriptions to this and other Galaxy lists, please use the
interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Send data to GBrowse

2011-02-07 Thread Liisa Koski
Hi,
I have just installed Galaxy and am trying to configure it to allow 
display of data on my local installation of GBrowse.

I have edited tool-data/shared/gbrowse/gbrowse_build_sites.txt (species 
http://domain.ca/cgi-bin/gbrowse/species   species)
 
As well as universe_wsgi.ini (gbrowse_display_sites = species)

However I do not see a link in history items after I stop and start the 
server.

Is there something else I'm missing?

Thanks in advance for your help,
Liisa___
galaxy-dev mailing list
galaxy-dev@lists.bx.psu.edu
http://lists.bx.psu.edu/listinfo/galaxy-dev