Re: [GENERAL] Migrating from MaxDB to postgresql

2004-09-09 Thread Christopher Browne
Quoth John [EMAIL PROTECTED]:
 As per subject, i'm considering migrating a database (still in
 development) from MaxDB to postgresql. The main reason for this is
 that the stored procedures (functions) in MaxDB are unreliable and
 hard to debug, and that the JDBC driver is still experimental.

 I thought I'd post here to find out how well functions and JDBC are
 supported in postgresql.

 If anyone has info or experience on these two things, I'd be
 interested to see what your opinion is.

Some hackery has occasionally taken place with the JDBC drivers that
has caused a bit of consternation (basically due to developers needing
some new features that were in the beta JDBC code), although that
seems to be fading with 7.4.  And if you don't hack on the drivers,
you won't get bitten by that sort of thing :-).

The fact that there are easily multiple levels of quoting in stored
functions certainly makes debugging a bit of a pain, but unreliability
is not one of the problems observed.  For instance, the Slony-I
replication system http://slony.info/ has a considerable portion of
its code that consists of pl/pgsql stored functions, and it would
break with great horribleness if stored functions were at all
flakey.
-- 
cbbrowne,@,linuxfinances.info
http://www.ntlug.org/~cbbrowne/
Just because you're paranoid doesn't mean they AREN'T after you. 

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] 8.0.0beta2: Ownership of implicit sequences after dump/restore

2004-09-09 Thread Georgi Chorbadzhiyski
I just experienced the same problem [1] with 8.0.0beta2.

[1] http://archives.postgresql.org/pgsql-bugs/2004-08/msg00086.php

-- 
Georgi Chorbadzhiyski
http://georgi.unixsol.org/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Byte Sizes

2004-09-09 Thread Pierre-Frédéric Caillaud
Hello,
* On the size of arrays :
I did not find any info in the Docs on this.
How many bytes does an array take on disk ?
	My bet would be 4 bytes for the number of dimensions, 4 per dimension for the length, and then the storage requirements of whatever's in the array.

	If the array contains fixed-size elements like integers, it'd be 4 bytes per element.
	However, if it contains variable-length elements like text, is there a pointer table? Or are the elements packed together?

Is there any advantage in using a smallint[] over an integer[] ?
Does a smallint[] with 2 elements really take 12 bytes ?
* On Alignment :
	The docs say fields are aligned on 4-byte boundaries.
	Does this mean that several consecutive smallint fields will take 4 bytes each?
	What about several consecutive char fields?

	I ask this because I'll have a lot of columns with small values to store in a table, and would like it to be small and to fit in the cache.

Thanks for any info.
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Text Search vs MYSQL vs Lucene

2004-09-09 Thread Vic Cekvenich
What would be performance of pgSQL text search vs MySQL vs Lucene (flat 
file) for a 2 terabyte db?
thanks for any comments.
.V
--
Please post on Rich Internet Applications User Interface (RiA/SoA) 
http://www.portalvu.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] MOVE doesn't return number of rows

2004-09-09 Thread Phil Endecott
Dear Postgresql experts,
According to the documentation for MOVE, it returns the number of rows 
that it has moved over.  It seems to me that this is true for MOVE 
FORWARD n, but not for MOVE RELATIVE n or MOVE ABSOLUTE n, which always 
return 1:

db=> declare c scroll cursor for select * from p;
DECLARE CURSOR
db=> move absolute -1 in c;
MOVE 1
db=> move absolute 1 in c;
MOVE 1
db=> move absolute 10 in c;
MOVE 1
db=> move relative 10 in c;
MOVE 1
db=> move relative 100 in c;
MOVE 1
db=> move forward 100 in c;
MOVE 100
Is this a bug?
What I'm actually trying to do is to find the number of rows in the 
query result.  I was hoping to be able to do a MOVE ABSOLUTE -1 to get 
this.  Any other suggestions?

Taking a step further back, here's the real problem: I want to present 
the user with an index page to chunks of a table, e.g.

Aardvark - Frog
Frozen - Rabbit
Rabies - Zoo
So I need to get something like rows 1,  n/3-1, n/3,  2n/3-1, 2n/3, n. 
Efficiency is of concern.  Is there some way of asking WHERE row_number 
MOD x < 2?  Currently I am running a count(*) version of the query to 
get the number of rows and then running it again with a cursor and 
fetching the necessary rows.  Any other ideas?
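A single count(*) followed by absolute fetches on the scroll cursor would at least avoid re-running the whole query. A sketch (the table name and boundary values are illustrative, with n taken from the count):

```sql
BEGIN;
SELECT count(*) FROM p;                   -- suppose this reports n = 999
DECLARE c SCROLL CURSOR FOR SELECT * FROM p;
FETCH ABSOLUTE 1 FROM c;                  -- row 1
FETCH ABSOLUTE 332 FROM c;                -- ~n/3 - 1
FETCH ABSOLUTE 333 FROM c;                -- ~n/3
FETCH ABSOLUTE 666 FROM c;                -- ~2n/3 - 1
FETCH ABSOLUTE 667 FROM c;                -- ~2n/3
FETCH ABSOLUTE 999 FROM c;                -- row n
COMMIT;
```

The second pass fetches only the six boundary rows rather than the whole result.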

This is with 7.4.2.
Regards,
--Phil.
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] pg_dump/pg_dumpall do not correctly dump search_path

2004-09-09 Thread Ben Trewern
All,
There seems to be a bug in pg_dumpall:
For one of my dbs I've done:
ALTER DATABASE dbname SET search_path = mw, public;
If I do a pg_dumpall I get a line like:
ALTER DATABASE dbname SET search_path TO 'mw, public';
note the quotes.  It's also in a place in the dump before the mw schema is 
created.  It's not a big problem but it makes dumps less automatic.

BTW If I do a pg_dump dbname I get a dump which does not reference the 
search_path change.  I'm not sure if this is by design or it is just 
missing.

I'm using PostgreSQL 7.4.5 on linux
Thanks for any help.
Ben

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Display of text fields

2004-09-09 Thread Richard Huxton
Ennio-Sr wrote:
Further to my message of 3rd inst. 
Following your suggestion and after reading some documents, I created
this sql script:

-
SELECT scheda_ltr, 
case scheda_ltr
when 'T' then
'select * from bib_lt;'
else
'autore, titolo, editore from bib_lt;'
end
FROM bib_lt;
-
but the result is not what I was after: I get a list showing one label or
the other, according to whether scheda_ltr is 'T' or not!
Is there any way, once the case is spotted, to obtain execution of the
query relating to that case, instead of just showing the label?
Of course I tried without the quotes obtaining parser error.
Ah - looks like I misunderstood what you were trying to do. There is no 
way to have a single query return rows with different numbers of columns 
- each row must be the same.

You'd have to do something like one of the following (substitute 
my_memo_column with whatever your memo field was called).

SELECT
  scheda_ltr,
  autore,
  titolo,
  editore,
  CASE
WHEN scheda_ltr = 'T' THEN my_memo_column
ELSE 'n/a'
  END AS my_memo_column
FROM
  bib_lt;
or...
SELECT
  scheda_ltr,
  CASE
WHEN scheda_ltr='T' THEN autore || ' / ' || titolo || ' / ' || editore
ELSE my_memo_column
  END AS merged_columns
FROM
  bib_lt;
HTH
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql hanging (blocking) with smp kernel

2004-09-09 Thread Gaetano Mendola
Marcel Groner wrote:
I have a problem with postgresql running on an SMP kernel.
setup:
master:
---
- Pentium 4 (hyperthreading)
- 2 GB Memory
- os: fedora core 1
- kernel: 2.4.22-1.2188.nptlsmp
- postgresql: 7.4.3-1PGDG
slave 1:

- Pentium 4 (hyperthreading)
- 2 GB Memory
- os: fedora core 1
- kernel: 2.4.22-1.2115.nptlsmp
- postgresql: 7.4.3-1PGDG
slave 2:

- Double Xeon (with hyperthreading)
- 2 GB Memory
- os: fedora core 1
- kernel: 2.4.22-1.2199.nptlsmp
- postgresql: 7.4.3-1PGDG
the replication is made by hand: reading from slave 1 or slave 2
(balanced) and writing to master, slave 1 and slave 2. Our site is a
high-traffic site (the biggest dating site in Switzerland:
www.swissflirt.ch) with > 1400 concurrent users and > 40'000 visits
per day.
master and slave 1 (with Pentium 4) are working perfectly with the
SMP kernel. slave 2 (with the double Xeon) has big problems. Running
e.g. kernel 2.4.22 (non-SMP) works too, but of course only one CPU is
used. When I use the SMP kernel, the connections (and queries) start
hanging (blocking) after a short time. Also, when I shut down the
application (using Java and JDBC to connect to the databases), the
postgres processes on slave 2 keep existing, while the processes on
master and slave 1 shut down properly.
Hanging on what?  I'm running postgres on a similar server (with HT
enabled too, so it looks like 4 CPUs) without any problem at all.
Show us:
select * from pg_stat_activity;
select * from pg_locks;
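Joining those two views shows who is waiting on what. A hedged sketch using the 7.4-era column names (procpid, current_query):

```sql
-- Ungranted locks and the queries involved (column names as of 7.4)
SELECT l.pid, l.relation, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```

Rows here during a hang would point at the blocking backend's PID.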
Regards
Gaetano Mendola



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Text Search vs MYSQL vs Lucene

2004-09-09 Thread Shridhar Daithankar
On Thursday 09 Sep 2004 6:26 pm, Vic Cekvenich wrote:
 What would be performance of pgSQL text search vs MySQL vs Lucene (flat
 file) for a 2 terabyte db?

Well, it depends upon a lot of factors. There are a few questions to be asked 
here:
- What is your hardware and OS configuration?
- What type of data are you dealing with? Mostly static or frequently updated?
- What type of query are you doing? Aggregates, table scans or selective 
retrieval, etc.

Unfortunately there is no one good answer. If you could provide details, it 
would help a lot..

 Shridhar

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] ERROR: parser: unterminated quoted string

2004-09-09 Thread Jeff Eckermann
--- Daniel Secomb [EMAIL PROTECTED] wrote:

 Hi,
 
 I'm using PostgreSQL 7.3.4 on phpPgAdmin 2.4.2 and
 I'm getting the following
 error when I try and run a SQL query on a script:
 
 Error -

/usr/local/psa/admin/htdocs/domains/databases/phpPgAdmin/db_readdump.php
 --
 Line: 33
 PostgreSQL said: ERROR: parser: unterminated quoted
 string at or near
 'Black Tea\015\012Fresh range of leaf tea for
 drinking black or with
 milk,\015\012including Ceylons, Assams and
 favourites like Irish
 Breakfast.\015\012\015\012Green Tea\015\012First
 quality Japanese & Chinese
 popular green teas, including Bancha, Genmaicha &
 flavoured green
 teas.\015\012\015\012Flavoured Tea\015\012Exotic and
 delicious, favourites
 are Vanilla, Arctic Fire & Lady
 Grey.\015\012\015\012Herbal
 Infusions\015\012Pure flowers and whole leaves of
 calming herbs, &
 a\015\012delightful range of specially blended
 herbal infusions & Fruit
 Melanges; at character 566
 
 
 
 Any idea as to why this is doing this?

Because there is an unterminated quoted string?  I see
a single quote just before "Black Tea", but I don't
see another anywhere in that string.
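For reference, a literal single quote inside a SQL string has to be doubled; the \015\012 sequences in the error are just escaped CR/LF pairs, not the problem. A minimal sketch (table and column names invented for illustration):

```sql
-- An embedded ' must be written as '' inside a string literal
INSERT INTO products (description)
VALUES ('Lady Grey''s blend - a customer''s favourite');
```

A stray unescaped quote anywhere in the dumped text would produce exactly this "unterminated quoted string" error.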

 
 Thank you for your help.
 
 Dan
 





---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Heritage

2004-09-09 Thread Mark Gibson
Sebastian Davancens wrote:
Hi everyone. I'm having a problem with inheritance. The situation is the following:
I have two tables, tbl_everyone and tbl_employees. tbl_employees
inherits from tbl_everyone.
In tbl_everyone, I store some information about everyone who is
related to the place where I work: Name, ID (PK), Birth Date,
Address...
Then, in tbl_employees I have additional information, like area, position, etc.
The problem appears when someone in tbl_everyone becomes
an employee. What do I have to do? Consider that the ID in tbl_everyone
is used in auxiliary tables, so I cannot easily delete the person from
tbl_everyone and insert it again in tbl_employees...
Have you tried deferred constraints, eg:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
DELETE ...;
INSERT ...;
END;
I haven't had a chance to test this, but I think it could
be what you're looking for.
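Concretely, something like the following might work (column names and the ID value are invented; note DELETE FROM ONLY, so the freshly inserted child row is not deleted along with the parent's copy):

```sql
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- copy the person into the child table, filling in the extra columns
INSERT INTO tbl_employees (id, name, birth_date, address, area, "position")
  SELECT id, name, birth_date, address, 'IT', 'Developer'
  FROM tbl_everyone WHERE id = 42;
-- ONLY: remove the row from the parent table itself, not from its children
DELETE FROM ONLY tbl_everyone WHERE id = 42;
COMMIT;
```

The deferral only helps if the FKs in the auxiliary tables were declared DEFERRABLE.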
--
Mark Gibson gibsonm |AT| cromwell |DOT| co |DOT| uk
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Text Search vs MYSQL vs Lucene

2004-09-09 Thread Steve Atkins
On Thu, Sep 09, 2004 at 07:56:20AM -0500, Vic Cekvenich wrote:

 What would be performance of pgSQL text search vs MySQL vs Lucene (flat 
 file) for a 2 terabyte db?
 thanks for any comments.

My experience with tsearch2 has been that indexing even moderately
large chunks of data is too slow to be feasible. Moderately large
meaning tens of megabytes.

Your mileage might well vary, but I wouldn't rely on postgresql full
text search of that much data being functional, let alone fast enough
to be useful. Test before making any decisions.

If it's a static or moderately static text corpus you're probably
better using a traditional FTS system anyway (tsearch2 has two
advantages - tight integration with pgsql and good support for
incremental indexing).
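For reference, incremental indexing with contrib/tsearch2 looks roughly like this (the 'default' configuration and the table/column names are assumptions for illustration):

```sql
ALTER TABLE docs ADD COLUMN fti tsvector;
UPDATE docs SET fti = to_tsvector('default', body);
CREATE INDEX docs_fti_idx ON docs USING gist (fti);
-- searching:
SELECT id FROM docs WHERE fti @@ to_tsquery('default', 'green & tea');
```

New rows only need their own to_tsvector() call (typically via a trigger), which is the incremental-indexing advantage mentioned above.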

Two terabytes is a lot of data. I'd suggest you do some research on
FTS algorithms rather than just picking one of the off-the-shelf FTS
systems without understanding what they actually do. Managing
Gigabytes ISBN 1-55860-570-3 covers some approaches.

Cheers,
  Steve

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Text Search vs MYSQL vs Lucene

2004-09-09 Thread Vic Cekvenich
It'd be at least a dual Opteron 64 with 4 gigs of RAM running Fedora, with a 
huge RAID of striped drives as a single volume.
A similar system and similar types of queries would be this:
http://marc.theaimsgroup.com

So I guess a table scan.
.V
Shridhar Daithankar wrote:
On Thursday 09 Sep 2004 6:26 pm, Vic Cekvenich wrote:
 

What would be performance of pgSQL text search vs MySQL vs Lucene (flat
file) for a 2 terabyte db?
   

Well, it depends upon a lot of factors. There are a few questions to be asked 
here:
- What is your hardware and OS configuration?
- What type of data are you dealing with? Mostly static or frequently updated?
- What type of query are you doing? Aggregates, table scans or selective 
retrieval, etc.

Unfortunately there is no one good answer. If you could provide details, it 
would help a lot..

Shridhar
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 


--
Please post on Rich Internet Applications User Interface (RiA/SoA) 
http://www.portalvu.com

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] How to determine a database is intact?

2004-09-09 Thread Vivek Khera
 TL == Tom Lane [EMAIL PROTECTED] writes:

TL Wes [EMAIL PROTECTED] writes:
 There's more than 250 million rows.  If I remember right, it's ballpark 25%
 data reload, 75% index/foreign constraint rebuild.  Pg_dumpall is something
 like 3 hours or so.

TL FWIW, increasing sort_mem for the reload process would probably help
TL with the index and FK rebuilds.  (8.0 rejiggers things so that the

I shaved significant time from 7.4.x restores by bumping up the
checkpoint_segments to 50.  My dumps currently take about 1.5 hours
over a 100Mbit ethernet (server is on gig ethernet, dumping client is
on 100Mbit).
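The postgresql.conf fragment for a restore session would look something like this (the values are the ones mentioned in this thread, shown as examples rather than recommendations):

```
checkpoint_segments = 50   # fewer, larger checkpoints during the bulk load
sort_mem = 65536           # KB; speeds index and FK rebuilds (example value)
```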

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD  +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Text Search vs MYSQL vs Lucene

2004-09-09 Thread David Garamond
Steve Atkins wrote:
What would be performance of pgSQL text search vs MySQL vs Lucene (flat 
file) for a 2 terabyte db?
thanks for any comments.
My experience with tsearch2 has been that indexing even moderately
large chunks of data is too slow to be feasible. Moderately large
meaning tens of megabytes.
My experience with MySQL's full text search as well as the various 
MySQL-based text indexing programs (forgot the names, it's been a while) 
for some 10-20GB of mail archives has been pretty disappointing too. My 
biggest gripe is with the indexing speed. It literally takes days to 
index less than a million documents.

I ended up using Swish++. Microsoft's CHM compiler also has pretty 
amazing indexing speed (though it crashes quite often when encountering 
bad HTML).

--
dave
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [JDBC] [GENERAL] ERROR: canceling query due to user request

2004-09-09 Thread Tom Lane
Paul Thomas [EMAIL PROTECTED] writes:
 On 09/09/2004 10:41 Oliver Jowett wrote:
 JDBC gives you no way to ensure you only call cancel() on a running query 
 (there's a race between query execution returning and the call to 
 cancel()). Calling cancel() on a statement that's not currently executing 
 should do nothing; if it ends up cancelling a future query, it's a driver 
 bug.

 Thanks for the explanation, Oliver. Maybe there is a driver bug then?

IIRC there was such a bug at one time, but I thought it had been fixed.
Maybe the problem is use of an old driver?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] How to determine a database is intact?

2004-09-09 Thread Bruce Momjian

Believe it or not, we haven't gotten many requests for this feature,
partly because such corruption is so rare.  Also, any checker isn't
going to find a change from "Baker" to "Faker" in a text field.

---

Wes wrote:
 On 9/4/04 5:28 PM, Tino Wildenhain [EMAIL PROTECTED] wrote:
 
  Well, with such a huge database you probably should consider
  different backup strategies, a filesystem with snapshot
  support (XFS?) could help where you can copy a state of the database
  at any time - so you can backup the database cluster without
  stopping the postmaster. Also replication via slony could be
  an option. 
 
 Yes, we are looking into using file system snapshots.  We are currently
 using primarily file system backups (shut down the DB, back up the file
 system).  The problem we ran into was that we didn't have a specific point
 in time where we knew with absolute certainty the backed up database was
 good - snapshots would not help here.
 
 I ended up starting with a recent backup, and working backwards until I
 found one that wouldn't crash postmaster on a pg_dumpall.  Rather than trust
 that there was no corruption in that version (data blocks might be good, but
 pg_dumpall doesn't test indexes), I did a pg_dumpall and reload.
 
  The best tool to verify the backup is probably the postmaster
  itself. I really doubt any other program would be smaller and
  faster :)
 
 Not really...  Postmaster won't tell you if a structure is bad until it
 stumbles on it and crashes (or politely reports an error).  Just because
 postmaster comes up doesn't mean your database is good.  As far as I know,
 there is no "verify database" command option on postmaster - postmaster
 won't fsck your database.
 
 Wes
 
 
 ---(end of broadcast)---
 TIP 7: don't forget to increase your free space map settings
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] MOVE doesn't return number of rows

2004-09-09 Thread Tom Lane
Phil Endecott [EMAIL PROTECTED] writes:
 According to the documentation for MOVE, it returns the number of rows 
 that it has moved over.

That's a documentation bug.  CVS tip has the correct explanation:

The count is the number of rows that a FETCH command with the same
parameters would have returned (possibly zero).

 What I'm actually trying to do is to find the number of rows in the 
 query result.  I was hoping to be able to do a MOVE ABSOLUTE -1 to get 
 this.  Any other suggestions?

MOVE FORWARD ALL.
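So the row count can be read off the MOVE command tag; a sketch using the cursor and table names from the original question:

```sql
BEGIN;
DECLARE c SCROLL CURSOR FOR SELECT * FROM p;
MOVE FORWARD ALL IN c;   -- the command tag reports the rows passed over
MOVE ABSOLUTE 0 IN c;    -- rewind before fetching
FETCH 10 FROM c;
COMMIT;
```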

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Upgrading PostgreSQL 7.1.3

2004-09-09 Thread Frank Finner
On Wed, 25 Aug 2004 10:04:54 +0200 Michal Taborsky [EMAIL PROTECTED] sat
down, thought long and then wrote:

 [EMAIL PROTECTED] wrote:
  What is the most advisable version I should upgrade to? My top goal is 
  reliability. Do I have to install/reinstall or upgrade? Do I need to be 
  concerned of any special feature when dumping/restoring data? Is there 
  any good doc about this?
 

...
 
 This worked for me. It is a long way from 7.1 to 7.4 so expect some 
 fiddling with the pg_dump parameters and maybe some sed-ing in between.

If I had to do this long jump, I would use INSERTs instead of COPYs, that is,
the option -D (full inserts) on pg_dump. It takes much longer to read the data
in, but IMHO it's the more secure way to get the data across without trouble. If your
server is big enough, do it within a single transaction, or do it with one
transaction per (big) table to save time and trouble with half-filled tables,
if something goes wrong.

And I would do the data insert separately from the database and table creation,
that is, first a pg_dump -s and restore to the new engine, then, when the
structure is ready, insert data you dumped with pg_dump -a -D.

At least this worked for me rather smoothly from 7.1 to 7.3.
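On the command line, the two-step restore described above might look like this (database names are placeholders; the BEGIN/COMMIT wrapper supplies the single transaction, since 7.x psql has no option for that):

```shell
pg_dump -s olddb > schema.sql      # structure only, from the old server
psql -d newdb -f schema.sql        # create the empty structure
pg_dump -a -D olddb > data.sql     # data as full INSERT statements
(echo 'BEGIN;'; cat data.sql; echo 'COMMIT;') | psql -d newdb
```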
-- 
Frank Finner

Memory follows memory, memory defeats memory; some things are banished
only into the realms of our rich imaginings  -  but this does not mean
that they do not or cannot or will not exist - they exist! They exist!
  (M. Moorcock, The Revenge Of The Rose)

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Obtaining the Julian Day from a date

2004-09-09 Thread Karl O. Pinc
Hi,
What's the best way to obtain the Julian day from a postgresql
date?
PostgreSQL 7.3.4 on i386-redhat-linux-gnu, compiled by GCC 
i386-redhat-linux-gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

I'm doing some date arithmetic with 1 day intervals and want
to, for example, round to the even Julian day.  I suppose
I could always take the interval from julian day zero
and then divide by the number of seconds in a day, but that
sounds both brutal and potentially inaccurate due to leap
seconds and so forth.
There's mention of being able to do  this in the list archives,
but nobody says how it's actually done.
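One way that avoids interval arithmetic entirely is to_char's 'J' template pattern, which formats a date as its Julian Day number:

```sql
SELECT to_char(DATE '2004-09-09', 'J');        -- Julian day, as text
SELECT to_char(current_date, 'J')::integer;    -- as an integer, for arithmetic
```

Casting back goes through to_date with the same pattern.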
Thanks.
Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
 -- Robert A. Heinlein
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] How to determine a database is intact?

2004-09-09 Thread Wes
On 9/9/04 11:07 AM, Bruce Momjian [EMAIL PROTECTED] wrote:

 Believe it or not, we haven't gotten many requests for this feature,
 partly because such corruption is so rare.  Also, any checker isn't
 going to find a change from Baker to Faker in a text field.

Yep, unless you added a CRC (and accepted the performance hit) to each
record the best you could do is verify that the database is consistent.
That would still be quite valuable, though - all block headers are valid,
indexes don't point out into oblivion, etc.
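A per-row checksum can be kept in an ordinary column maintained by the application; a hedged sketch using md5() (new in 7.4; table and column names invented):

```sql
ALTER TABLE big_table ADD COLUMN row_md5 text;
UPDATE big_table
   SET row_md5 = md5(coalesce(col1, '') || '|' || coalesce(col2, ''));
-- later, a verification pass flags rows whose contents no longer match:
SELECT id FROM big_table
 WHERE row_md5 IS DISTINCT FROM md5(coalesce(col1, '') || '|' || coalesce(col2, ''));
```

This is the performance hit mentioned above: every insert and update pays for the hash.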

I expect there are only a handful of huge databases running a heavy load -
the vast majority are probably no larger than tens (maybe hundreds) of
megabytes, or do not experience a heavy update load.

Wes


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] How to determine a database is intact?

2004-09-09 Thread Wes
On 9/9/04 9:27 AM, Vivek Khera [EMAIL PROTECTED] wrote:

 I shaved significant time from 7.4.x restores by bumping up the
 checkpoint_segments to 50.  My dumps currently take about 1.5 hours
 over a 100Mbit ethernet (server is on gig ethernet, dumping client is
 on 100Mbit).

Mine are already set to 30, but in a recent re-index, I saw warnings in the
log so need to bump it up some more.  I'm going to likely be doing a reload
this weekend, so will include this in the run.

Wes


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])