[galaxy-dev] running two Galaxy installations on one server - was: Re: Can not clean my galaxy datasets

2011-12-21 Thread Hans-Rudolf Hotz

Hi Liram

I was suggesting that, instead of copying the existing Galaxy directory to 
another location, you download and install a new Galaxy instance in a 
separate location on your file system.


for the installation procedure, see:

http://wiki.g2.bx.psu.edu/Admin/Get%20Galaxy
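In short, the check-out step from that page is a Mercurial clone of galaxy-dist 
into a new directory (a minimal sketch; the target path is just an example):

    hg clone https://bitbucket.org/galaxy/galaxy-dist/ /path/to/galaxy-dist_test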

All you need to do in the new installation is change the port number. 
It will create its own directory tree and, by default, use its own 
SQLite database, so there should be no interference with your 
existing production Galaxy installation.
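For example, after copying universe_wsgi.ini.sample to universe_wsgi.ini in the 
new check-out, the only change needed is the port (8081 below simply mirrors the 
value Liram already chose; leaving database_file commented out keeps this instance 
on its own SQLite database):

    [server:main]
    # use a port that differs from the production instance (default is 8080)
    port = 8081

    [app:main]
    # left at the default, so this check-out gets its own database
    #database_file = database/universe.sqlite

After that, start the test instance with "sh run.sh" from the new directory.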


Regards, Hans


On 12/21/2011 09:16 AM, liram_va...@agilent.com wrote:

Hi Hans,

Thank you for your quick reply.

Sorry, but I'm a "rookie" Galaxy user,
so could you please explain in detail what you mean by "running a second Galaxy server
in parallel using a fresh check-out"?
Thanks a lot for your assistance!

Liram

-Original Message-
From: Hans-Rudolf Hotz [mailto:h...@fmi.ch]
Sent: Monday, December 19, 2011 7:01 PM
To: VARDI,LIRAM (A-Labs,ex1)
Cc: Jennifer Jackson; Galaxy Dev; BEN-DOR,AMIR (A-Labs,ex1)
Subject: Re: [galaxy-dev] Can not clean my galaxy datasets

Hi Liram

This sounds like the two Galaxy servers use the same (PostgreSQL/MySQL) database.

Before copying your existing Galaxy server, you might want to start by running 
a second Galaxy server in parallel using a fresh check-out.

Regards, Hans


Have you tried to run a

On 12/19/2011 04:08 PM, Jennifer Jackson wrote:



On 12/19/11 6:15 AM, liram_va...@agilent.com wrote:

Hi Jennifer,

Thank you again for the quick reply in my last issue.

I have another Galaxy problem and I hope you will be able to assist.
Because I am developing my Galaxy server while members of my group are
using this Galaxy instance in parallel, I am trying to create another
Galaxy process (instance), which will be used only for testing...

Therefore, I created a copy of the Galaxy directory in another location
(let's say /galaxy-dist_test) and changed the listening port in
"universe_wsgi.ini" to another port (8081).
Then I tried to run both Galaxy instances in parallel
(one listening on port 8080 and the other on 8081).

The problem occurs when I send a job to run on one of the instances
(let's say the one running on 8080): it causes the other instance
(the one on port 8081) to collapse (and the history disappears...).

What is the problem?
How can I solve it?
How can I create two different, self-contained Galaxy servers on
the same computer?

I also tried to follow the instructions at:
http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Web%20Application%20Scaling

But they were not helpful for my problem...

Thank you a lot!
Liram

-Original Message-
From: Jennifer Jackson [mailto:j...@bx.psu.edu]
Sent: Tuesday, November 29, 2011 4:42 PM
To: VARDI,LIRAM (A-Labs,ex1)
Cc: galaxy-...@bx.psu.edu
Subject: Re: [galaxy-dev] Can not clean my galaxy datasets

Hello Liram,

Perhaps the allow_user_dataset_purge option has not been set to True
in universe_wsgi.ini?

Please see this wiki for details:
http://wiki.g2.bx.psu.edu/Admin/Disk%20Quotas#Quotas
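In universe_wsgi.ini that would look like the following (a minimal excerpt; 
restart Galaxy after changing it):

    [app:main]
    # allow users to remove their deleted datasets from disk themselves
    allow_user_dataset_purge = True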

Hopefully this helps, but please let us know if you need more
assistance,

Best,

Jen
Galaxy team

On 11/29/11 5:44 AM, liram_va...@agilent.com wrote:

Hello,

My name is Liram Vardi and I'm using a local Galaxy instance.

Anyway, I can't clean my deleted datasets from my own local disk.

When I try to delete a dataset, as explained in
http://wiki.g2.bx.psu.edu/Learn/Managing%20Datasets#Actions,

first I use the "delete 'X' icon" next to the dataset to delete it.

Then, when I go to "Options -> Show Deleted Datasets", I can see my
"deleted dataset" in the list

with the note "This dataset has been deleted. Click _here_ to
undelete it", but I do not also get the option "or _here_ to
immediately remove it from disk."

Also, when I use the option "Options -> Purge Deleted Datasets", I
get the message "0 datasets have been deleted permanently" and

my deleted dataset still stays in the "Options -> Show Deleted Datasets"
menu list.

I also tried to clear the history, but when I do that, the
"using X Mb" indicator in the upper right corner is still not reset.

What is the problem?

Thanks a lot for your help!

Liram



___
Please keep all replies on the list by using "reply all"
in your mail client. To manage your subscriptions to this and other
Galaxy lists, please use the interface at:

http://lists.bx.psu.edu/


--
Jennifer Jackson
http://usegalaxy.org
http://galaxyproject.org/wiki/Support




___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


[galaxy-dev] Exiting a java application without errors

2011-12-21 Thread graham etherington (TSL)
Hi,
I've created a Java app, tested it from the command line, and put it in the
shared/jars folder. It runs for a while and then the history item turns
red. If I view the dataset output, though, everything is as it should be, i.e.
the expected output from the tool is there.
If I click on the 'view error' bug icon, this is what I see:

"Tool execution generated the following error message:
21-Dec-2011 17:08:14
org.biojava3.genome.parsers.gff.GFF3Reader read
INFO: Gff.read(): Reading
/home/galaxy/software/galaxy-central/database/files/012/dataset_12450.dat"

This is the output from org.biojava3.genome.parsers.gff and I have no way
of switching it off.
Can anyone suggest a way to handle this output so that it doesn't throw an
error? I need the output from this tool to use as input to other tools.
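One common workaround, sketched here on the assumption that Galaxy at this point
flags a job as failed whenever the tool writes anything to stderr, is to redirect
stderr into stdout on the tool's command line so the GFF3Reader INFO message never
reaches stderr (the jar name and options below are made up for illustration):

    java -jar my_gff_tool.jar --input $input --output $output 2>&1

Since the real results go to the output file, merging stderr into stdout only moves
the log message into the job's captured stdout, which Galaxy of this era does not
treat as an error.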
Many thanks,
Graham


Dr. Graham Etherington
Bioinformatics Support Officer,
The Sainsbury Laboratory,
Norwich Research Park,
Norwich NR4 7UH.
UK
Tel: +44 (0)1603 450601




___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Galaxy issues

2011-12-21 Thread David Matthews
Hi Nate,

Many thanks for these ideas - our HPC guys are going to try a few things. 
Hopefully we'll nail the problem and be able to report back in case someone 
else has the same issues.


Best Wishes,
David.

__
Dr David A. Matthews

Senior Lecturer in Virology
Room E49
Department of Cellular and Molecular Medicine,
School of Medical Sciences
University Walk,
University of Bristol
Bristol.
BS8 1TD
U.K.

Tel. +44 117 3312058
Fax. +44 117 3312091

d.a.matth...@bristol.ac.uk






On 19 Dec 2011, at 15:56, Nate Coraor wrote:

> On Dec 14, 2011, at 6:13 PM, David Matthews wrote:
> 
>> Hi Guys,
>> 
>> Sorry to be a pain but this seems to be getting worse for us. Here are the 
>> latest tracebacks - any suggestions would be gratefully received!!
> 
> Hi David,
> 
> As the MemoryError indicates, the Galaxy process is running out of memory.  
> debug = False is preferable, actually.  I asked because having debug = True 
> could easily result in the behavior you're seeing.
> 
> The pbs code definitely has a memory leak, I believe within libtorque or 
> pbs_python.  Because of this, I restart my job runner process when it reaches 
> a certain amount of memory usage.  However, this may not be the cause of your 
> errors.  To figure it out, we'll need to know exactly which thread is 
> consuming the memory.  You may want to enable the heartbeat log and look 
> there to see which threads are active.
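(The heartbeat log Nate mentions is switched on in universe_wsgi.ini; a minimal
sketch, assuming the option name from the sample config of that era:)

    [app:main]
    # periodically write the status of each thread to heartbeat.log
    use_heartbeat = True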
> 
> The question about the path was in reference to whether these errors occur 
> immediately upon running a tophat job, without any interaction, or if they 
> occur when you try to click to view the job's output, or on some other part 
> of the Galaxy interface.
> 
> Thanks,
> --nate
> 
>> 
>> Cheers
>> David
>> 
>> 
>> 
>>> galaxy.jobs.runners.pbs ERROR 2011-12-13 19:57:57,689 Uncaught exception checking jobs
>>> Traceback (most recent call last):
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 338, in monitor
>>>     self.check_watched_items()
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 351, in check_watched_items
>>>     ( failures, statuses ) = self.check_all_jobs()
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 462, in check_all_jobs
>>>     statuses.update( self.convert_statjob_to_bunches( jobs ) )
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 476, in convert_statjob_to_bunches
>>>     statuses[ job.name ] = Bunch( **status )
>>> MemoryError
>>> Unhandled exception in thread started by
>>> Traceback (most recent call last):
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/threading.py", line 504, in __bootstrap
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/threading.py", line 580, in __bootstrap_inner
>>> MemoryError
>>> Unhandled exception in thread started by >> of >
>>> Traceback (most recent call last):
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/threading.py", line 504, in __bootstrap
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/threading.py", line 545, in __bootstrap_inner
>>> MemoryError
>>> Unexpected exception in worker  at 0x883acf8>
>>> Traceback (most recent call last):
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 863, in worker_thread_callback
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1037, in 
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1056, in process_request_in_thread
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1044, in handle_error
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 334, in handle_error
>>> MemoryError
>>> Unhandled exception in thread started by >> of >
>>> Traceback (most recent call last):
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/threading.py", line 504, in __bootstrap
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/threading.py", line 545, in __bootstrap_inner
>>> MemoryError
>>>
>>> Exception happened during processing of request from ('xxx.xxx.xxx.xxx', 44389)
>>> Traceback (most recent call last):
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1053, in process_request_in_thread
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 322, in finish_request
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 616, in __init__
>>>   File "/gpfs/cluster/isys/galaxy/Galaxy/lib/python2.6/SocketServer.py", line 657, in setup
>>> MemoryError

Re: [galaxy-dev] Can not clean my galaxy datasets

2011-12-21 Thread liram_vardi
Hi Hans,

Thank you for your quick reply.

Sorry, but I'm a "rookie" Galaxy user,
so could you please explain in detail what you mean by "running a second Galaxy
server in parallel using a fresh check-out"?
Thanks a lot for your assistance!

Liram

-Original Message-
From: Hans-Rudolf Hotz [mailto:h...@fmi.ch] 
Sent: Monday, December 19, 2011 7:01 PM
To: VARDI,LIRAM (A-Labs,ex1)
Cc: Jennifer Jackson; Galaxy Dev; BEN-DOR,AMIR (A-Labs,ex1)
Subject: Re: [galaxy-dev] Can not clean my galaxy datasets

Hi Liram

This sounds like the two Galaxy servers use the same (PostgreSQL/MySQL) database.

Before copying your existing Galaxy server, you might want to start by running
a second Galaxy server in parallel using a fresh check-out.

Regards, Hans


Have you tried to run a

On 12/19/2011 04:08 PM, Jennifer Jackson wrote:
>
>
> On 12/19/11 6:15 AM, liram_va...@agilent.com wrote:
>> Hi Jennifer,
>>
>> Thank you again for the quick reply in my last issue.
>>
>> I have another Galaxy problem and I hope you will be able to assist.
>> Because I am developing my Galaxy server while members of my group are
>> using this Galaxy instance in parallel, I am trying to create another
>> Galaxy process (instance), which will be used only for testing...
>>
>> Therefore, I created a copy of the Galaxy directory in another location
>> (let's say /galaxy-dist_test) and changed the listening port in
>> "universe_wsgi.ini" to another port (8081).
>> Then I tried to run both Galaxy instances in parallel
>> (one listening on port 8080 and the other on 8081).
>>
>> The problem occurs when I send a job to run on one of the instances
>> (let's say the one running on 8080): it causes the other instance
>> (the one on port 8081) to collapse (and the history disappears...).
>>
>> What is the problem?
>> How can I solve it?
>> How can I create two different, self-contained Galaxy servers on
>> the same computer?
>>
>> I also tried to follow the instructions at:
>> http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Web%20Application%20Scaling
>>
>> But they were not helpful for my problem...
>>
>> Thank you a lot!
>> Liram
>>
>> -Original Message-
>> From: Jennifer Jackson [mailto:j...@bx.psu.edu]
>> Sent: Tuesday, November 29, 2011 4:42 PM
>> To: VARDI,LIRAM (A-Labs,ex1)
>> Cc: galaxy-...@bx.psu.edu
>> Subject: Re: [galaxy-dev] Can not clean my galaxy datasets
>>
>> Hello Liram,
>>
>> Perhaps the allow_user_dataset_purge option has not been set to True 
>> in universe_wsgi.ini?
>>
>> Please see this wiki for details:
>> http://wiki.g2.bx.psu.edu/Admin/Disk%20Quotas#Quotas
>>
>> Hopefully this helps, but please let us know if you need more 
>> assistance,
>>
>> Best,
>>
>> Jen
>> Galaxy team
>>
>> On 11/29/11 5:44 AM, liram_va...@agilent.com wrote:
>>> Hello,
>>>
>>> My name is Liram Vardi and I'm using a local Galaxy instance.
>>>
>>> Anyway, I can't clean my deleted datasets from my own local disk.
>>>
>>> When I try to delete a dataset, as explained in
>>> http://wiki.g2.bx.psu.edu/Learn/Managing%20Datasets#Actions,
>>>
>>> first I use the "delete 'X' icon" next to the dataset to delete it.
>>>
>>> Then, when I go to "Options -> Show Deleted Datasets", I can see my
>>> "deleted dataset" in the list
>>>
>>> with the note "This dataset has been deleted. Click _here_ to
>>> undelete it", but I do not also get the option "or _here_ to
>>> immediately remove it from disk."
>>>
>>> Also, when I use the option "Options -> Purge Deleted Datasets", I
>>> get the message "0 datasets have been deleted permanently" and
>>>
>>> my deleted dataset still stays in the "Options -> Show Deleted Datasets"
>>> menu list.
>>>
>>> I also tried to clear the history, but when I do that, the
>>> "using X Mb" indicator in the upper right corner is still not reset.
>>>
>>> What is the problem?
>>>
>>> Thanks a lot for your help!
>>>
>>> Liram
>>>
>>>
>>>
>>> ___
>>> Please keep all replies on the list by using "reply all"
>>> in your mail client. To manage your subscriptions to this and other 
>>> Galaxy lists, please use the interface at:
>>>
>>> http://lists.bx.psu.edu/
>>
>> --
>> Jennifer Jackson
>> http://usegalaxy.org
>> http://galaxyproject.org/wiki/Support
>>
>

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Problem uploading files from filesystem paths

2011-12-21 Thread Makis Ladoukakis

This seems like a permissions error in your Galaxy folders. Try:

sudo chmod 777 galaxy-dist/

and then try to upload data again.
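For what it's worth, the traceback below shows the failure happening when upload.py
launches 'samtools' via subprocess, so it is also worth checking that samtools is
installed, executable and on the PATH of the user Galaxy runs as (a quick sketch,
run as that user):

    which samtools                # should print a path
    ls -l "$(which samtools)"     # should be executable by the Galaxy user
    samtools 2>&1 | head -n 3     # prints the usage banner if it can be executed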

To: galaxy-...@bx.psu.edu
From: liisa.ko...@dnalandmarks.ca
Date: Tue, 20 Dec 2011 16:22:02 -0500
Subject: [galaxy-dev] Problem uploading files from filesystem paths

Hi,

I'm trying to upload data to Data Libraries from filesystem
paths as Admin. I get the following error.

Any ideas?



Thanks in advance,

Liisa





Traceback (most recent call last): File 
"/data/Galaxy/galaxy-dist/tools/data_source/upload.py",
line 394, in __main__() File 
"/data/Galaxy/galaxy-dist/tools/data_source/upload.py",
line 386, in __main__ add_file( dataset, registry, js 

Job Standard Error 

Traceback (most recent call last):
  File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 394, in 
    __main__()
  File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 386, in __main__
    add_file( dataset, registry, json_file, output_path )
  File "/data/Galaxy/galaxy-dist/tools/data_source/upload.py", line 300, in add_file
    if datatype.dataset_content_needs_grooming( dataset.path ):
  File "/data/Galaxy/galaxy-dist/lib/galaxy/datatypes/binary.py", line 79, in dataset_content_needs_grooming
    version = self._get_samtools_version()
  File "/data/Galaxy/galaxy-dist/lib/galaxy/datatypes/binary.py", line 63, in _get_samtools_version
    output = subprocess.Popen( [ 'samtools' ], stderr=subprocess.PIPE, stdout=subprocess.PIPE ).communicate()[1]
  File "/usr/lib64/python2.6/subprocess.py", line 633, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.6/subprocess.py", line 1139, in _execute_child
    raise child_exception
OSError: [Errno 13] Permission denied





___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Galaxy Server on Bio-Linux

2011-12-21 Thread Bicak, Mesude
Dear Galaxy Developers,

We work in Professor Dawn Field's group (Molecular Evolution and Bioinformatics 
Research Group) at the NERC Environmental Bioinformatics Centre (NEBC) of the 
Centre for Ecology and Hydrology (CEH) Research Institute based in Oxford.

We develop and distribute Bio-Linux (http://nebc.nerc.ac.uk/tools/bio-linux), 
which is a customised Ubuntu distribution that comes with 500+ bioinformatics 
packages. Within our research group we also provide bioinformatics analysis for 
NERC-funded researchers, and recently started looking into Galaxy as well. It 
didn't take us long to discover its power and we would like to enable Bio-Linux 
users to install, run and maintain the Galaxy server with minimal effort, also 
with the aim of spreading the word about Galaxy in Europe!

Recently we took on a project, with Dr. Casey Bergman from the University of 
Manchester as the Principal Investigator, to package all the necessary Galaxy 
dependencies for Ubuntu/Bio-Linux. As many prerequisites are already included 
with Bio-Linux, we are already some way down this path. New packages that we 
create will appear in a Launchpad PPA 
(https://launchpad.net/~nebc/+archive/galaxy).

We will be happy to hear any comments regarding these efforts and we hope that 
this will be a useful resource for all Galaxy users. Once the initial packaging 
is done, we hope to collaborate with the Galaxy team in maintaining and improving 
this resource.

Best wishes,
Tim, Soon and Mesude

--
Dr. Mesude Bicak 
Bioinformatician & Bio-Linux Developer

NERC Biomolecular Analysis Facility (NBAF) 
NERC Environmental Bioinformatics Centre (NEBC) 

Molecular Evolution and Bioinformatics Group
Centre for Ecology and Hydrology 
Maclean Building, Benson Lane
Crowmarsh Gifford
Wallingford, Oxfordshire
OX10 8BB

Office Tel: +44 1491 69 2705

--
This message (and any attachments) is for the recipient only. NERC
is subject to the Freedom of Information Act 2000 and the contents
of this email and any reply you make may be disclosed by NERC unless
it is exempt from release under the Act. Any material supplied to
NERC may be stored in an electronic records management system.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] [query] contribute a data library for standard inclusion in the public Galaxy?

2011-12-21 Thread William Ray

My apologies for the noise - I'm just trying to figure out who to talk to here:

A colleague and I are developing a dataset that we believe will be of 
significant interest to the psychiatric genetics field.  We believe that one of 
the biggest obstacles currently facing the field is the lack of a centralized, 
standardized repository for this data.  We therefore would like, if possible, 
to make the data available through Galaxy as a standard library (we believe 
that the psychiatric genetics field, in particular, is more likely to respond 
positively to, and utilize, a standard universal component, rather than a 
private/shared component).

Is there someone we could talk to about this idea, and the required logistics 
to make it happen?

Many thanks for your time,
William Ray
Chris Bartlett
Nationwide Children's Hospital
The Ohio State University Biophysics Program



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] manage_db.sh upgrade fails after fresh install of Galaxy

2011-12-21 Thread Iry Witham
I have installed a new Galaxy server using the November 18, 2011 
distribution on a SLES 11 VM and cannot get it to launch cleanly.  I am running 
PostgreSQL 8.3.9 to create my database.  Once the database is created and 
running, I launch my Galaxy server and within moments all of the PIDs fail except 
for reports_webapp.pid.  When I tail the logs, I find the following:

tail reports_webapp.log -
galaxy.webapps.reports.buildapp DEBUG 2011-12-21 14:20:54,179 Enabling 'httpexceptions' middleware
galaxy.webapps.reports.buildapp DEBUG 2011-12-21 14:20:54,194 Enabling 'recursive' middleware
/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/Paste-1.6-py2.6.egg/paste/exceptions/serial_number_generator.py:11: DeprecationWarning: the md5 module is deprecated; use hashlib instead
  import md5
galaxy.webapps.reports.buildapp DEBUG 2011-12-21 14:20:54,272 Enabling 'error' middleware
galaxy.webapps.reports.buildapp DEBUG 2011-12-21 14:20:54,277 Enabling 'trans logger' middleware
galaxy.webapps.reports.buildapp DEBUG 2011-12-21 14:20:54,277 Enabling 'config' middleware
galaxy.webapps.reports.buildapp DEBUG 2011-12-21 14:20:54,294 Enabling 'x-forwarded-host' middleware
Starting server in PID 20411.
serving on 0.0.0.0:9001 view at http://127.0.0.1:9001

tail web0.log -
galaxy.model.migrate.check DEBUG 2011-12-21 14:20:54,793 psycopg2 egg successfully loaded for postgres dialect
Traceback (most recent call last):
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/lib/galaxy/web/buildapp.py", line 82, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/lib/galaxy/app.py", line 39, in __init__
    create_or_verify_database( db_url, kwargs.get( 'global_conf', {} ).get( '__file__', None ), self.config.database_engine_options )
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/lib/galaxy/model/migrate/check.py", line 105, in create_or_verify_database
    % ( db_schema.version, migrate_repository.versions.latest, config_arg ) )
Exception: Your database has version '38' but this code expects version '85'.  Please backup your database and then migrate the schema by running 'sh manage_db.sh -c ./universe_wsgi.webapp.ini upgrade'.
Removing PID file web0.pid

tail runner0.log -
galaxy.model.migrate.check DEBUG 2011-12-21 14:20:54,789 psycopg2 egg successfully loaded for postgres dialect
Traceback (most recent call last):
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/lib/galaxy/web/buildapp.py", line 82, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/lib/galaxy/app.py", line 39, in __init__
    create_or_verify_database( db_url, kwargs.get( 'global_conf', {} ).get( '__file__', None ), self.config.database_engine_options )
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/lib/galaxy/model/migrate/check.py", line 105, in create_or_verify_database
    % ( db_schema.version, migrate_repository.versions.latest, config_arg ) )
Exception: Your database has version '38' but this code expects version '85'.  Please backup your database and then migrate the schema by running 'sh manage_db.sh -c ./universe_wsgi.runner.ini upgrade'.
Removing PID file runner0.pid
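The exceptions above ask for a database backup before migrating the schema; a
minimal sketch of that step with pg_dump (database name and user are placeholders),
followed by the upgrade command the error message itself suggests:

    pg_dump -U galaxy_user galaxy_db > galaxy_db_backup.sql
    sh manage_db.sh -c ./universe_wsgi.webapp.ini upgrade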


sh manage_db.sh upgrade -

38 -> 39...

Migration script to add a synopsis column to the library table.

Traceback (most recent call last):
  File "./scripts/manage_db.py", line 63, in 
    main( repository=repo, url=db_url )
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/shell.py", line 150, in main
    ret = command_func(**kwargs)
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/api.py", line 221, in upgrade
    return _migrate(url, repository, version, upgrade=True, err=err, **opts)
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/api.py", line 349, in _migrate
    schema.runchange(ver, change, changeset.step)
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/schema.py", line 184, in runchange
    change.run(self.engine, step)
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/versioning/script/py.py", line 101, in run
    func()
  File "lib/galaxy/model/migrate/versions/0039_add_synopsis_column_to_library_table.py", line 20, in upgrade
    c.create( Library_table )
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/sqlalchemy_migrate-0.5.4-py2.6.egg/migrate/changeset/schema.py", line 365, in create
    engine._run_visitor(visitorcallable, self, *args, **kwargs)
  File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/engine/base.py", line 115