[galaxy-dev] Cleanest way to disable job-runs during a long cluster maintenance window?

2014-05-01 Thread Curtis Hendrickson (Campus)
Folks

The cluster under our galaxy server will be down for a WEEK, so it can be 
relocated.
The fileserver that hosts the galaxy datasets, the galaxy server and the galaxy 
database will stay up.

What's the most elegant way to disable people's ability to run new jobs, 
without blocking them from browsing existing histories and downloading/viewing 
their data?

Our configuration:
External Auth
universe_wsgi.ini:use_remote_user = True
Job runner: SGE
universe_wsgi.ini:start_job_runners = drmaa
and the new_file_path and job_working_directories are on a 
fileserver that will be down.
grep -H /scratch/share/galaxy universe_wsgi.ini
universe_wsgi.ini:file_path = /scratch/share/galaxy/staging  # available
universe_wsgi.ini:new_file_path = /scratch/share/galaxy/temp  # down
universe_wsgi.ini:job_working_directory = 
/scratch/share/galaxy/job_working_directory # down

We'd rather have something nicer than just letting the jobs go to the queue and 
never get run.

Regards,
Curtis

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Cleanest way to disable job-runs during a long cluster maintenance window?

2014-05-01 Thread Peter Cock
Hi Curtis,

This used to be easy - there was a setting on the admin pages to lock
new job submission, which I would use before a planned shutdown.

See https://trello.com/c/BTDaHy9m/1269-re-institute-job-lock-feature
(and vote on it to help prioritize the issue).

Right now I'm not sure if there is an easy alternative :(

Peter


On Thu, May 1, 2014 at 6:45 PM, Curtis Hendrickson (Campus)
 wrote:
> Folks
>
>
>
> The cluster under our galaxy server will be down for a WEEK, so it can be
> relocated.
>
> The fileserver that hosts the galaxy datasets, the galaxy server and the
> galaxy database will stay up.
>
>
>
> What’s the most elegant way to disable people’s ability to run new jobs,
> without blocking them from browsing existing histories and
> downloading/viewing their data?
>
>
>
> Our configuration:
>
> External Auth
>
> universe_wsgi.ini:use_remote_user = True
>
> Job runner: SGE
>
> universe_wsgi.ini:start_job_runners = drmaa
>
> and the new_file_path and job_working_directories are on a
> fileserver that will be down.
>
> grep -H /scratch/share/galaxy universe_wsgi.ini
>
> universe_wsgi.ini:file_path = /scratch/share/galaxy/staging  # available
>
> universe_wsgi.ini:new_file_path = /scratch/share/galaxy/temp  # down
>
> universe_wsgi.ini:job_working_directory =
> /scratch/share/galaxy/job_working_directory # down
>
>
>
> We’d rather something nicer than just letting the jobs go to the queue and
> never get run.
>
>
>
> Regards,
>
> Curtis
>
>
>
>

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] unable to purge deleted datasets

2014-05-01 Thread Nate Coraor
On Sat, Apr 26, 2014 at 12:52 AM, Milad Bastami  wrote:

>
> Hi all,
> Using the two commands below, I am unable to purge deleted datasets (from a
> local instance of Galaxy installed on Ubuntu). I have deleted some datasets
> from the history panel and I want to purge only the datasets, not the history
> itself.
>
>
> commands:
> 1. python scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini
> -d 0 -6 -r
> 2. python scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini
> -d 0 -3 -r
> results:
> Purged 0 datasets
> Freed disk space:  0
>
> I can't figure out what's wrong. Maybe a privilege problem or something
> else?? Any help would be most appreciated.
>

Hi Milad,

There's a simpler way to selectively purge datasets. If
`allow_user_dataset_purge = True` in universe_wsgi.ini, you can use the
"Include deleted datasets" history option to make those datasets visible,
then use the "immediately remove from disk" link to purge the file.

That said, if the process you've used is still not removing any datasets
from disk, you may need to ensure that there are no undeleted instances of
the datasets in question. Are those datasets shared or copied between
histories?

--nate


>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Cleanest way to disable job-runs during a long cluster maintenance window?

2014-05-01 Thread Curtis Hendrickson (Campus)
Peter, 

Thanks for the quick reply. Saved me a lot of google/grep time. 

Here's what I did as a workaround (added it to the card as a comment). Anyone 
got a more thorough/elegant solution? 

Regards,
Curtis


Workaround:

1. Backup migrated_tools_conf.xml, shed_tool_conf.xml and tool_conf.xml
2. edit each to remove all sections
3. edit static/welcome.html to notify users with a big orange announcement

That gives you an empty tool pane, though it doesn't prevent anyone from 
hitting "re-run" on an existing dataset, or doing something through the API.


-Original Message-
From: Peter Cock [mailto:p.j.a.c...@googlemail.com] 
Sent: Thursday, May 01, 2014 1:03 PM
To: Curtis Hendrickson (Campus)
Cc: galaxy-dev PSU list-serv (galaxy-dev@lists.bx.psu.edu)
Subject: Re: [galaxy-dev] Cleanest way to disable job-runs during a long 
cluster maintenance window?

Hi Curtis,

This used to be easy - there was a setting on the admin pages to lock new job 
submission, which I would use before a planned shutdown.

See https://trello.com/c/BTDaHy9m/1269-re-institute-job-lock-feature
(and vote on it to help prioritize the issue).

Right now I'm not sure if there is an easy alternative :(

Peter


On Thu, May 1, 2014 at 6:45 PM, Curtis Hendrickson (Campus)  
wrote:
> Folks
>
>
>
> The cluster under our galaxy server will be down for a WEEK, so it can 
> be relocated.
>
> The fileserver that hosts the galaxy datasets, the galaxy server and 
> the galaxy database will stay up.
>
>
>
> What’s the most elegant way to disable people’s ability to run new 
> jobs, without blocking them from browsing existing histories and 
> downloading/viewing their data?
>
>
>
> Our configuration:
>
> External Auth
>
> universe_wsgi.ini:use_remote_user = 
> True
>
> Job runner: SGE
>
> universe_wsgi.ini:start_job_runners = 
> drmaa
>
> and the new_file_path and job_working_directories are 
> on a fileserver that will be down.
>
> grep -H /scratch/share/galaxy universe_wsgi.ini
>
> universe_wsgi.ini:file_path = /scratch/share/galaxy/staging  # 
> available
>
> universe_wsgi.ini:new_file_path = /scratch/share/galaxy/temp  # down
>
> universe_wsgi.ini:job_working_directory = 
> /scratch/share/galaxy/job_working_directory # down
>
>
>
> We’d rather something nicer than just letting the jobs go to the queue 
> and never get run.
>
>
>
> Regards,
>
> Curtis
>
>
>
>

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Help with cluster setup

2014-05-01 Thread Nate Coraor
On Wed, Apr 30, 2014 at 4:20 AM, 沈维燕  wrote:
>
> Hi Nate,
> From your previous email, job deletion in the pbs runner would be fixed in
> the next stable release of Galaxy. Has this bug been fixed in this version of
> Galaxy (https://bitbucket.org/galaxy/galaxy-dist/get/3b3365a39194.zip)?
> Thank you very much for your help.
>
> Regards, Weiyan


Hi Weiyan,

Yes, this fix is included in the April 2014 stable release. However, I
would strongly encourage you to use `hg clone` rather than downloading a
static tarball. There have been a number of patches to the stable branch
since its April release. In addition, the linked tarball would pull from
the "default" branch of Galaxy, which includes unstable changesets.

--nate

>
>
>
> 2013-08-08 22:58 GMT+08:00 Nate Coraor :
>>
>> On Aug 7, 2013, at 9:23 PM, shenwiyn wrote:
>>
>> > Yes, and I am also confused about that. Actually, when I set a server:
>> > section in universe_wsgi.ini as follows for a try, my Galaxy doesn't work
>> > with the cluster; if I remove the server: section, it works.
>>
>> Hi Shenwiyn,
>>
>> Are you starting all of the servers that you have defined in
>> universe_wsgi.ini?  If using run.sh, setting GALAXY_RUN_ALL in the
>> environment will do this for you:
>>
>> http://wiki.galaxyproject.org/Admin/Config/Performance/Scaling
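
A minimal sketch of that, assuming run.sh is invoked from the galaxy-dist root
as described on the wiki page above:

  GALAXY_RUN_ALL=1 sh run.sh --daemon        # start every [server:*] defined in universe_wsgi.ini
  GALAXY_RUN_ALL=1 sh run.sh --stop-daemon   # stop them all again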
>>
>> > [server:node01]
>> > use = egg:Paste#http
>> > port = 8080
>> > host = 0.0.0.0
>> > use_threadpool = true
>> > threadpool_workers = 5
>> > This is my job_conf.xml :
>> > walltime=24:00:00,nodes=1:ppn=4,mem=10G
>> > scripts/drmaa_external_runner.py
>> > scripts/drmaa_external_killer.py
>> > scripts/external_chown_script.py
>> >
>> The galaxy_external_* options are only supported with the drmaa plugin,
>> and actually only belong in universe_wsgi.ini for the moment; they have
>> not been migrated to the new-style job configuration.  They should also
>> only be used if you are attempting to set up "run jobs as the real user"
>> job running capabilities.
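
A quick way to locate those options in the sample config; the option names
below are assumed to match the scripts quoted above and only matter for
"run jobs as the real user" setups:

  grep -n external universe_wsgi.ini.sample
  # expected entries (roughly):
  #   drmaa_external_runjob_script = scripts/drmaa_external_runner.py
  #   drmaa_external_killjob_script = scripts/drmaa_external_killer.py
  #   external_chown_script = scripts/external_chown_script.py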
>>
>> > Furthermore, when I want to kill my jobs by clicking the delete icon in
>> > the Galaxy web interface, the job keeps on running in the background.
>> > I do not know how to fix this.
>> > Any help on this would be appreciated. Thank you very much.
>>
>> Job deletion in the pbs runner was recently broken, but a fix for this
>> bug will be part of the next stable release (on Monday).
>>
>> --nate
>>
>> >
>> > shenwiyn
>> >
>> > From: Jurgens de Bruin
>> > Date: 2013-08-07 19:55
>> > To: galaxy-dev
>> > Subject: [galaxy-dev] Help with cluster setup
>> > Hi,
>> >
>> > This is my first Galaxy installation setup, so apologies for stupid
>> > questions. I am setting up Galaxy on a cluster running Torque as the
>> > resource manager. I am working through the documentation but I am unclear
>> > on some things:
>> >
>> > Firstly, I am unable to find start_job_runners within universe_wsgi.ini
>> > and I don't want to just add this anywhere - any help on this would be
>> > great.
>> >
>> > Furthermore, this is my job_conf.xml:
>> >
>> > Regards/Groete/Mit freundlichen Grüßen/recuerdos/meilleures salutations/
>> > distinti saluti/siong/duì yú/привет
>> >
>> > Jurgens de Bruin
>>
>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] Globus tools for galaxy

2014-05-01 Thread David Hoover
Are there any tools available for transferring files using Globus?  There’s 
nothing in the toolsheds, but there was a lot of chatter a few years ago.

David Hoover
Helix Systems Staff
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Cleanest way to disable job-runs during a long cluster maintenance window?

2014-05-01 Thread Björn Grüning

Hi Curtis,

We simply remove our Galaxy server from the SGE submit host list, so we 
can't submit anything to the queue. That works quite well.
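
A minimal sketch of that approach with Grid Engine's qconf, run by an SGE
admin on the qmaster; the hostname is a placeholder:

  qconf -ds galaxy.example.org   # remove the Galaxy host from the submit host list
  # ...maintenance window...
  qconf -as galaxy.example.org   # add it back afterwards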


Cheers,
Bjoern

On 01.05.2014 21:17, Curtis Hendrickson (Campus) wrote:

Peter,

Thanks for the quick reply. Saved me a lot of google/grep time.

Here's what I did to work-around (added it to the card as a comment). Anyone 
got a more thorough/elegant solution?

Regards,
Curtis


Workaround:

1. Backup migrated_tools_conf.xml, shed_tool_conf.xml and tool_conf.xml
2. edit each to remove all sections
3. edit static/welcome.html to notify users with a big orange announcement

That gives you an empty tool pane, though it doesn't prevent anyone from hitting 
"re-run" on an existing dataset, or doing something through the API.


-Original Message-
From: Peter Cock [mailto:p.j.a.c...@googlemail.com]
Sent: Thursday, May 01, 2014 1:03 PM
To: Curtis Hendrickson (Campus)
Cc: galaxy-dev PSU list-serv (galaxy-dev@lists.bx.psu.edu)
Subject: Re: [galaxy-dev] Cleanest way to disable job-runs during a long 
cluster maintenance window?

Hi Curtis,

This used to be easy - there was a setting on the admin pages to lock new job 
submission, which I would use before a planned shutdown.

See https://trello.com/c/BTDaHy9m/1269-re-institute-job-lock-feature
(and vote on it to help prioritize the issue).

Right now I'm not sure if there is an easy alternative :(

Peter


On Thu, May 1, 2014 at 6:45 PM, Curtis Hendrickson (Campus)  
wrote:

Folks



The cluster under our galaxy server will be down for a WEEK, so it can
be relocated.

The fileserver that hosts the galaxy datasets, the galaxy server and
the galaxy database will stay up.



What’s the most elegant way to disable people’s ability to run new
jobs, without blocking them from browsing existing histories and
downloading/viewing their data?



Our configuration:

 External Auth

 universe_wsgi.ini:use_remote_user =
True

 Job runner: SGE

 universe_wsgi.ini:start_job_runners =
drmaa

 and the new_file_path and job_working_directories are
on a fileserver that will be down.

grep -H /scratch/share/galaxy universe_wsgi.ini

universe_wsgi.ini:file_path = /scratch/share/galaxy/staging  #
available

universe_wsgi.ini:new_file_path = /scratch/share/galaxy/temp  # down

universe_wsgi.ini:job_working_directory =
/scratch/share/galaxy/job_working_directory # down



We’d rather something nicer than just letting the jobs go to the queue
and never get run.



Regards,

Curtis






___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Galaxy Error Message

2014-05-01 Thread Bradley Belfiore
I am running into a problem with the coloring of the KEGG image produced. When
loading pathview and running this script in R:

pv.out <- pathview(gene.data=gse16873.d, pathway.id="04110", species="hsa",
out.suffix="gse")

you get an image with full coloration. With the same command in Galaxy, the
image produced has only white and red. I'm wondering if this is a problem
within my scripts or what may be causing this:

read<-read.delim(file <- gene.data, header=TRUE )

pv.out <- pathview(gene.data=read, pathway.id= pathway.id, species=species,
out.suffix=out.suffix, kegg.native=T)

write.csv(pv.out$plot.data.gene, file=table)

file.rename(paste(species, pathway.id, ".", out.suffix, ".png", sep = ""),
output)

Thanks,

Brad




On Tue, Apr 29, 2014 at 6:30 PM, Ross  wrote:

> FWIW: looks like a trivial xml syntax error is in this line:
>
>  interpreter="Rscript"> 
> /Users/bbelfio1/galaxy-dist/tools/pathview/Pathview.R
> $gene.data $pathway.id $species $out.suffix $output 
> .*..*
>
> There seems to be one ">" too many - the first one is probably a problem?
>
> For the record, viewing your XML in Firefox is sometimes very helpful for
> finding invalid XML which can otherwise be very hard to spot.
>
>
>
> On Wed, Apr 30, 2014 at 2:32 AM, Bradley Belfiore 
> wrote:
>
>> The XML file is:
>> 
>>  Pathview is a tool set for pathway based data integration
>> and visualization. It maps and renders a wide variety of biological data
>> on relevant pathway graphs.
>>  interpreter="Rscript">
>> /Users/bbelfio1/galaxy-dist/tools/pathview/Pathview.R $gene.data $
>> pathway.id $species $out.suffix $output 
>>
>>
>> With basic Rscript :
>> args <- commandArgs(TRUE)
>> ## inputs
>> gene.data <- args[1]
>> pathway.id <- args[2]
>> species <-args[3]
>> out.suffix <-args[4]
>> output <-args[5]
>>
>> suppressMessages(library("pathview"))
>> suppressMessages(library("KEGGgraph"))
>> suppressMessages(library("Rgraphviz"))
>>
>> pv.out <- pathview(gene.data=gene.data , pathway.id= pathway.id,
>> species=species, out.suffix=output, kegg.native=T)
>>
>>
>> On Tue, Apr 29, 2014 at 12:26 PM, Hans-Rudolf Hotz  wrote:
>>
>>> Hi Brad
>>>
>>> To me, this looks like an error in the tool definition file (i.e. the xml
>>> file). Something like using $gene in the "command" tag without defining it
>>> in a "param" tag. But it is difficult to guess without seeing the full xml
>>> file.
>>>
>>>
>>> Regards Hans-Rudolf
>>>
>>>
>>> On 04/29/2014 06:13 PM, Bradley Belfiore wrote:
>>>
 Upon going back over my script, I got it working on the command line as
 suggested, and when attempting to execute it in my instance of Galaxy I got
 this error message which I was not sure about:

 Traceback (most recent call last):
File "/Users/bbelfio1/galaxy-dist/lib/galaxy/jobs/runners/__init__.py",
 line 152, in prepare_job
  job_wrapper.prepare()
File "/Users/bbelfio1/galaxy-dist/lib/galaxy/jobs/__init__.py",
 line 701, in prepare
  self.command_line = self.tool.build_command_line( param_dict )
File "/Users/bbelfio1/galaxy-dist/lib/galaxy/tools/__init__.py",
 line 2773, in build_command_line
  command_line = fill_template( self.command, context=param_dict )
File "/Users/bbelfio1/galaxy-dist/lib/galaxy/util/template.py",
 line 9, in fill_template
  return str( Template( source=template_text, searchList=[context] )
 )
File "/Users/bbelfio1/galaxy-dist/eggs/Cheetah-2.2.2-py2.7-
 macosx-10.6-intel-ucs2.egg/Cheetah/Template.py", line 1004, in __str__
  return getattr(self, mainMethName)()
File "DynamicallyCompiledCheetahTemplate.py", line 83, in respond
 NotFound: cannot find 'gene'



 On Mon, Apr 28, 2014 at 12:07 PM, Hans-Rudolf Hotz >>> > wrote:

 Hi Brad



 On 04/28/2014 05:26 PM, Bradley Belfiore wrote:

 So upon doing what you suggested, I get:

 bravo:galaxy-dist bbelfio1$ Rscript
 /Users/bbelfio1/galaxy-dist/tools/pathview/Pathview.R
 '/Users/bbelfio1/Documents/sample.txt' '04110' 'HSA'



 I said  "Rscript_wrapper.sh /Users/bbelfio1/galaxy-."

 I don't see the word "Rscript_wrapper.sh" in your line, hence it
 does not correspond to the command galaxy is executing

 You provide 3 arguments '/Users/bbelfio1/Documents/sample.txt',

 '04110', and 'HSA'

 However in an earlier mail you mention four arguments:
 $genedata
 $pathwayid
 $species
 $output

 Error in grep(species, pathway.id) :
   argument "pathway.id" is missing, with no default

 Calls: pathview -> 

[galaxy-dev] Cannot access saved history nor saved datasets

2014-05-01 Thread Joshua David Aaker
I am able to use my current history and run tools, but I can't switch to other
histories or create new ones.

I am currently using the Cistrome instance of Galaxy.

Here is the error I receive:

Internal Server Error
Galaxy was unable to successfully complete your request
URL: http://cistrome.org/ap/history/list
Module galaxy.web.framework.middleware.error:149 in __call__
>>   app_iter = self.application(environ, sr_checker)
Module paste.recursive:84 in __call__
>>   return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__
>>   return self.application(environ, start_response)
Module galaxy.web.framework.base:132 in __call__
>>   return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:190 in handle_request
>>   body = method( trans, **kwargs )
Module galaxy.web.framework:98 in decorator
>>   return func( self, trans, *args, **kwargs )
Module galaxy.webapps.galaxy.controllers.history:294 in list
>>   return self.stored_list_grid( trans, status=status, message=message, **kwargs )
Module galaxy.web.framework.helpers.grids:209 in __call__
>>   total_num_rows = query.count()
Module sqlalchemy.orm.query:2571 in count
Module sqlalchemy.orm.query:2215 in scalar
Module sqlalchemy.orm.query:2184 in one
Module sqlalchemy.orm.query:2227 in __iter__
Module sqlalchemy.orm.query:2242 in _execute_and_instances
Module sqlalchemy.engine.base:1449 in execute
Module sqlalchemy.engine.base:1584 in _execute_clauseelement
Module sqlalchemy.engine.base:1698 in _execute_context
Module sqlalchemy.engine.base:1691 in _execute_context
Module sqlalchemy.engine.default:331 in do_execute
Module MySQLdb.cursors:173 in execute
Module MySQLdb.connections:36 in defaulterrorhandler
OperationalError: (OperationalError) (1030, 'Got error 28 from storage engine') 
'SELECT count(*) AS count_1 \nFROM (SELECT history.id AS history_id, 
history.create_time AS history_create_time, history.update_time AS 
history_update_time, history.user_id AS history_user_id, history.name AS 
history_name, history.hid_counter AS history_hid_counter, history.deleted AS 
history_deleted, history.purged AS history_purged, history.importing AS 
history_importing, history.genome_build AS history_genome_build, 
history.importable AS history_importable, history.slug AS history_slug, 
history.published AS history_published \nFROM history \nWHERE history.importing 
= %s AND %s = history.user_id AND history.deleted = %s ORDER BY 
history.update_time DESC) AS anon_1' (0, 1889L, 0)

Any suggestions, or is there more information you need?

-Josh
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/