[GENERAL] storing files: blob, toasted text or filesystem?

2004-10-03 Thread Joolz
Hello everyone,

Sorry if this is a FAQ, but I've groups.googled the subject and
can't find a definite answer (if such a thing exists). I'm working
on a db in postgresql on a debian stable server, ext3 filesystem.
The db will contain files, not too many (I expect somewhere between
10 and 100 files max to be inserted daily), and not too big (mostly
pdf files, some images; the size will rarely be larger than 1 MB).

My plan was to store the files in the db as BLOBs, which seemed the
most elegant solution because these files are logically related to
objects that are in the db (customers, people etc.) Recently someone
warned me that this would have a large performance impact on the db
and said it's better to store the files on the filesystem and keep
some sort of pointer in the db.

Google was contradictory, some people even had performance problems
when using the filesystem/pointer approach and went to BLOBs for
that reason. Can anyone tell me (or point me in the right direction)
what is the best way to do this, BLOBs / filesystem+pointer /
toasted text?
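
For reference, the in-database ("toasted") variant could look roughly like the
sketch below; the table and column names are purely illustrative, not an
existing schema. The filesystem+pointer variant would simply replace the
content column with a text column holding a path on disk.

CREATE TABLE customer_file (
    id          serial PRIMARY KEY,
    customer_id integer NOT NULL,   -- points at the customer row the file belongs to
    filename    text NOT NULL,
    mime_type   text,
    content     bytea               -- large values are moved to TOAST automatically
);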

Thanks!


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] storing files: blob, toasted text or filesystem?

2004-10-03 Thread Kristian Rink

Hi there, Joolz;


On Sun, 3 Oct 2004 10:48:25 +0200 (CEST)
"Joolz" <[EMAIL PROTECTED]> wrote:

> Google was contradictory, some people even had performance
> problems when using the filesystem/pointer approach and went to
> BLOBs for that reason. Can anyone tell me (or point me in the
> right direction) what is the best way to do this, BLOBs /
> filesystem+pointer / toasted text?

Though not running postgresql for that solution: We are running an
enterprise-scaled document management system to keep track of
currently > 2*10^3 documents (mostly *.hpgl and *.plt files, some
*.pdfs and *.zips in between), and the (proprietary) dms software we
are using for that purpose exclusively relies on the
filesystem+pointer approach instead of storing all the files inside
the tablespace of the database.

Even though the current system is being migrated because of some severe
limits of the current setup (Windows NT 4 servers running an
old MSSQL... :/ ), so far this concept seems to work well there, even
with both a database and a filesystem which don't really go for top
performance.

Hope this helps, have a nice weekend.
Kris



-- 
Kristian Rink   -- Programmierung/Systembetreuung
planConnect GmbH * Strehlener Str. 12 - 14 * 01069 Dresden
0351 4657702 * 0176 24472771 * [EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] storing files: blob, toasted text or filesystem?

2004-10-03 Thread Kristian Rink
On Sun, 3 Oct 2004 12:34:57 +0200
Kristian Rink <[EMAIL PROTECTED]> wrote:

> Though not running postgresql for that solution: We are running an
> enterprise-scaled document management system to keep track of
> currently > 2*10^3 documents (mostly *.hpgl and *.plt files, some
^^^

Should be "2*10^6", of course - messing with 2000 files probably
ain't that challenging after all... 


Cheers,
Kris

-- 
Kristian Rink   -- Programmierung/Systembetreuung
planConnect GmbH * Strehlener Str. 12 - 14 * 01069 Dresden
0351 4657702 * 0176 24472771 * [EMAIL PROTECTED]



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Gaetano Mendola
Christopher Browne wrote:
> [EMAIL PROTECTED] (Aleksey Serba) wrote:
>
>>   Hello!
>>
>>   I have 24/7 production server under high load.
>>   I need to perform vacuum full on several tables to recover disk
>>   space / memory  usage frequently ( the server must be online during
>>   vacuum time )
>
>
> The main thought is: "Don't do that."
>
> It is almost certainly the wrong idea to do a VACUUM FULL.
>
> Assuming that the tables in question aren't so large that they cause
> mass eviction of buffers, it should suffice to do a plain VACUUM (and
> NOT a "VACUUM FULL") on the tables in question quite frequently.
This is easy to say and almost impracticable. I run a 7.4.5 with the autovacuum:
pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
I also have a "vacuumdb -z -v -a" running every six hours, and if I don't execute
a vacuum FULL for one week I collect almost 400 MB of dead rows :-(
For this reason, even with a 7.4.5, I'm obliged to run a vacuum full at least once
a week and a reindex once a month.
And my FSM parameters are large enough:
INFO:  free space map: 141 relations, 26787 pages stored; 26032 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 200 pages = 11780 kB shared memory.
Regards
Gaetano Mendola
PS: I do not have any "idle in transaction" connections around.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Random not so random

2004-10-03 Thread Arnau Rebassa
Hi everybody,
 I'm doing the following query:
   select * from messages order by random() limit 1;
 in the table messages I have more than 200 messages and, a lot of the time, 
the message retrieved is the same. Does anybody know how I could do a more 
"random" random?

 Thank you very much
--
Arnau

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] sequence rename?

2004-10-03 Thread ben f
So I am renaming a table, and the last stumbling block
that I've met is the associated sequence.  I tried the
commands suggested @ 

http://mailman.fastxs.net/pipermail/dbmail-dev/2004-August/004307.html

ie:

CREATE SEQUENCE $newseq;
SELECT setval('$newseq', max($column)) FROM $table;
ALTER TABLE $table ALTER COLUMN $column SET DEFAULT
nextval('$newseq'::text);
DROP SEQUENCE $oldseq;

but when trying to perform the DROP SEQUENCE (in psql), I
get a message like: 

ERROR:  Cannot drop sequence $oldseq because table
$table column $column requires it
You may drop table $table column $column instead


After that, I tried the query suggested here:

http://www.commandprompt.com/ppbook/index.lxp?lxpwrap=x14316%2ehtm#REMOVINGASEQUENCE
(example 7-34)

And it came back empty.

What am I doing wrong?  When I \d $table, it shows no
such dependency.  Is there another way to pull this
off?

thanks

ben

ps -- please cc responses directly to me, since i'm
not a subscriber.




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] See the good, the bad, and the ugly of your Postgres data with OneClickRevelation(tm)

2004-10-03 Thread Damodar Periwal
--
   Software Tree Revs Up JDX OR-Mapper 
With Innovative And High-Performance Features
--

Software Tree has announced JDX 4.5, the versatile and patented
Object-Relational Mapping (OR-Mapping) software that significantly
accelerates the development of Java/J2EE applications by eliminating
tedious, low-level SQL coding for persistence.   The novel feature of
OneClickRevelation(tm) provides instant and interactive insight into
existing data in relational sources like Postgres.  Object caching
provides super-fast access to business objects from high-performance
memory cache, thereby avoiding costly database trips.  A third-party
clustered cache like Tangosol Coherence can optionally back the object
cache.

The new OneClickRevelation feature of JDXStudio(tm) GUI tool unlocks
and presents relational data in an intuitive graphical format with
just one click of a button. This innovative method enables application
developers to quickly and easily view and analyze their data, offering
them fresh insights and approaches to harness the power of the data
without requiring a single line of programming.

Object caching improves query performance significantly. By finding
objects in the local memory cache, this feature avoids time-consuming
trips to the database, resulting in fast response time as well as
better throughput. Support for third-party clustered cache (e.g.,
Coherence from Tangosol) further improves the effectiveness of the JDX
cache in a clustered environment.

Press Release:
http://www.softwaretree.com/press/JDX45Sept2004.htm

JDX Highlights:
http://www.softwaretree.com/products/jdx/JDXHighlights.htm

Please visit http://www.softwaretree.com for a free evaluation
download of JDX 4.5.


Best regards,

-- Damodar Periwal

Software Tree, Inc.
Simplify Data Integration
http://www.softwaretree.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] earthdistance is not giving correct results.

2004-10-03 Thread Edmund Bacon
[EMAIL PROTECTED] (mike cox) writes:

> I'm running PostgreSQL 8.0 beta 1.  I'm using the
> earthdistance to find the distance between two
> different latitude and logitude locations. 
> Unfortunately, the result seems to be wrong.
> 
> Here is what I'm doing:
> select
> earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));
> 
> The result I get is this:
> 

I believe ll_to_earth() is expecting ll_to_earth(latitude, longitude).

Also, I think earth_distance() returns its value in meters.
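
As a sanity check, something like the following (the coordinates from the
original post, reordered to latitude first, with the result in meters divided
by 1609.344 to get statute miles) should come out near the expected ~150 miles:

select earth_distance(ll_to_earth('45.513746', '-122.55688'),
                      ll_to_earth('47.648845', '-122.396357')) / 1609.344 AS miles;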

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] earthdistance results seem to be wrong.

2004-10-03 Thread Mike Cox
I'm running PostgreSQL 8.0 beta 1.  I'm using the
earthdistance module to find the distance between two
different latitude and longitude locations.
Unfortunately, the result seems to be wrong.

Here is what I'm doing:
select
earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));

The result I get is this:

128862.563227506

The distance from Portland to Seattle is not 128862
miles.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Query problem...

2004-10-03 Thread Mike Rylander
You may want to take a look at the ltree and tablefunc contrib
modules.  They both allow you to do something like this, and they
abstract away the difficulty of query building.  ltree will let you
precompute the tree, and the tablefunc module has a connectby()
function for runtime parent-child relationship evaluation.
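
For the specific rollup-style counts asked for below, a plain UNION ALL of
per-level GROUP BY queries may also be enough (a sketch against the blech
table from the quoted message, untested):

SELECT * FROM (
    SELECT count(*) AS cnt, id1, NULL::integer AS id2, NULL::integer AS id3
        FROM blech GROUP BY id1
    UNION ALL
    SELECT count(*), id1, id2, NULL::integer FROM blech GROUP BY id1, id2
    UNION ALL
    SELECT count(*), id1, id2, id3 FROM blech GROUP BY id1, id2, id3
) AS t
ORDER BY id1, coalesce(id2, 0), coalesce(id3, 0);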


On Sat, 2 Oct 2004 15:12:46 -0700, Net Virtual Mailing Lists
<[EMAIL PROTECTED]> wrote:
> Hello,
> 
> I have 3 tables which are joined that I need to create a summation for
> and I just cannot get this to work.
> 
> Here's an example:
> 
> CREATE TABLE table1 (
> id1 INTEGER,
> title1 VARCHAR
> );
> INSERT INTO table1 VALUES (1, 'Heading #1');
> INSERT INTO table1 VALUES (2, 'Heading #2');
> 
> CREATE TABLE table2 (
> id1 INTEGER,
> id2 INTEGER,
> title2 VARCHAR
> );
> INSERT INTO table2 VALUES (1, 1, 'Category #1.1');
> INSERT INTO table2 VALUES (1, 2, 'Category #1.2');
> INSERT INTO table2 VALUES (2, 1, 'Category #2.1');
> INSERT INTO table2 VALUES (2, 2, 'Category #2.2');
> 
> CREATE TABLE table3 (
> id1 INTEGER,
> id2 INTEGER,
> id3 INTEGER,
> title3 VARCHAR
> );
> INSERT INTO table3 VALUES (1, 1, 1, 'Sub-Category #1.1.1');
> INSERT INTO table3 VALUES (1, 1, 2, 'Sub-Category #1.1.2');
> INSERT INTO table3 VALUES (1, 2, 1, 'Sub-Category #1.2.1');
> INSERT INTO table3 VALUES (1, 2, 2, 'Sub-Category #1.2.2');
> INSERT INTO table3 VALUES (2, 1, 1, 'Sub-Category #2.1.1');
> INSERT INTO table3 VALUES (2, 1, 2, 'Sub-Category #2.1.2');
> INSERT INTO table3 VALUES (2, 2, 1, 'Sub-Category #2.2.1');
> INSERT INTO table3 VALUES (2, 2, 2, 'Sub-Category #2.2.2');
> 
> What I am trying to represent is some sort of hierarchical data here, for
> example:
> 
> Heading #1
>   Category #1.1
>     Sub-Category #1.1.1
>     Sub-Category #1.1.2
>   Category #1.2
>     Sub-Category #1.2.1
>     Sub-Category #1.2.2
> Heading #2
>   Category #2.1
>     Sub-Category #2.1.1
>     Sub-Category #2.1.2
>   Category #2.2
>     Sub-Category #2.2.1
>     Sub-Category #2.2.2
> 
> ... I hope that makes sense.. Perhaps I'm going about this the wrong way
> to begin with?
> 
> In any event, the problem is now I have another table which uses these
> tables.  For example:
> 
> CREATE TABLE blech (
>somedata  VARCHAR,
>id1   INTEGER,
>id2   INTEGER,
>id3   INTEGER
> );
> 
> INSERT INTO blech VALUES ('Record #1', 1, 1, 1);
> INSERT INTO blech VALUES ('Record #2', 1, 1, 1);
> INSERT INTO blech VALUES ('Record #3', 1, 2, 1);
> INSERT INTO blech VALUES ('Record #4', 1, 1, 2);
> INSERT INTO blech VALUES ('Record #5', 2, 1, 1);
> 
> ... etc... (NOTE: id1, id2, and id3 cannot be NULL in this table)
> 
> What I want is a query that will give me this:
> 
> count |  id1   |   id2   | id3
> ------+--------+---------+------
>4  |   1| |
>3  |   1|1|
>1  |   1|1|  1
>1  |   1|1|  2
>1  |   1|2|
>1  |   1|2|  1
>1  |   2| |
>1  |   2|1|
>1  |   2|1|  1
> 
> I've tried all manner of LEFT JOINs, GROUP BYs, and even tried using
> UNION, but I just can't seem to get the results I need.  I'm definitely
> not married to this type of schema, if there is a more efficient way of
> handling this I'm all for it.
> 
> Thanks as always!
> 
> - Greg
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
>

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] guaranteeing that a sequence never skips

2004-10-03 Thread David Garamond
Am I correct to assume that SERIAL does not guarantee that a sequence 
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips, 
e.g. when generating invoice/ticket/formal letter numbers. Would an 
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice, 
or must I install a trigger too to do additional checking?

--
dave
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] sequence rename?

2004-10-03 Thread Alvaro Herrera
On Fri, Oct 01, 2004 at 01:17:38PM -0700, ben f wrote:
> So I am renaming a table, and the last stumbling block
> that I've met is the associated sequence.  I tried the
> commands suggested @ 
> 
> http://mailman.fastxs.net/pipermail/dbmail-dev/2004-August/004307.html
> 
> ie:
> 
> CREATE SEQUENCE $newseq
> SELECT setval('$newseq', max($column)) FROM $table
> ALTER TABLE $table ALTER COLUMN $column SET DEFAULT
> nextval('$newseq'::text)
> DROP SEQUENCE $oldseq

How about

ALTER TABLE $oldseq RENAME TO $newseq;

-- 
Alvaro Herrera ()
"Y eso te lo doy firmado con mis lágrimas" (Fiebre del Loco)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] earthdistance results seem to be wrong.

2004-10-03 Thread Chris Mair

> select
> earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));
> 
> The result I get is this:
> 
> 128862.563227506
> 
> The distance from Portland to Seattle is not 128862
> miles.

That's roughly 128,000 m, i.e. 128 km: earth_distance() returns meters, not miles.

Welcome to the metric system :)

Bye, Chris.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] earthdistance is not giving correct results.

2004-10-03 Thread Jean-Luc Lachance
I agree, NS or EW long lat should be the same.
I was just pointing to the wrong figure.  Also, if ll_to_earth takes lat 
first, it should report an error for a |lat| > 90...

Michael Fuhr wrote:
On Sat, Oct 02, 2004 at 09:29:16PM -0400, Jean-Luc Lachance wrote:
> Maybe it would work with the right long & lat...
> try
> Portland OR -122.67555, 45.51184
> Seattle WA -122.32956, 47.60342

It doesn't matter which hemisphere the longitudes are in as long
as they're in the same hemisphere:

test=> select earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));
  earth_distance  
------------------
 128862.563227506
(1 row)

test=> select earth_distance(ll_to_earth('-122.55688','45.513746'),ll_to_earth('-122.396357','47.648845'));
  earth_distance  
------------------
 128862.563227506
(1 row)

What *does* matter is that one specify (lat, lon) instead of
(lon, lat):

test=> select earth_distance(ll_to_earth('45.513746', '122.55688'),ll_to_earth('47.648845', '122.396357'));
  earth_distance  
------------------
 237996.256627247
(1 row)

That's 238km, or about 148mi; using your coordinates gives almost
the same answer, about 234km or 146mi.  As I said, the distance
between Portland and Seattle is around 150mi.

> Also, do not forget that it is the line distance, not the driving distance.

I doubt anybody thought that earth_distance() was calculating driving
distance.  How would it know what route to follow without an extensive
road database and a route specification?
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Out of memory errors on OS X

2004-10-03 Thread Scott Ribe
> I have asked Apple about using a saner default for shmmax, but a few
> more complaints in their bug system wouldn't hurt.

I suspect it won't help, since their official position is already "don't use
shmget, use mmap instead"...


-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 665-7007 voice



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Out of memory errors on OS X

2004-10-03 Thread Tom Lane
Scott Ribe <[EMAIL PROTECTED]> writes:
>> I have asked Apple about using a saner default for shmmax, but a few
>> more complaints in their bug system wouldn't hurt.

> I suspect it won't help, since their official position is already "don't use
> shmget, use mmap instead"...

Given that they have improved their SysV IPC support steadily over the
past few Darwin releases, I don't see why you'd expect them to not be
willing to do this.  Having a larger default limit costs them *zero* if
the feature is not used, so what's the objection?

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] guaranteeing that a sequence never skips

2004-10-03 Thread Scott Marlowe
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence 
> won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> 
> Sometimes a business requirement is that a serial sequence never skips, 
> e.g. when generating invoice/ticket/formal letter numbers. Would an 
> INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice, 
> or must I install a trigger too to do additional checking?

You will have to lock the whole table and your parallel performance will
be poor.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Tom Lane
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> Christopher Browne wrote:
>>> Assuming that the tables in question aren't so large that they cause
>>> mass eviction of buffers, it should suffice to do a plain VACUUM (and
>>> NOT a "VACUUM FULL") on the tables in question quite frequently.

> This is easy to say and almost impracticable. I run a 7.4.5 with the autovacuum:

> pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8

I'm not very familiar at all with appropriate settings for autovacuum,
but doesn't the above say to vacuum a table only when the dead space
reaches 50%?  That seems awfully lax to me.  I've always thought one
should vacuum often enough to keep dead space to maybe 10 to 25%.
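
(Rough arithmetic, assuming the 7.4 contrib pg_autovacuum formula of
threshold = base value + scaling factor * reltuples: with -v 300 -V 0.5, a
1,000,000-row table is only vacuumed once about 300 + 0.5 * 1,000,000 = 500,300
tuples have been updated or deleted, i.e. roughly half the table.)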

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] guaranteeing that a sequence never skips (fwd)

2004-10-03 Thread Mike Nolan
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence 
> > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> > 
> > Sometimes a business requirement is that a serial sequence never skips, 
> > e.g. when generating invoice/ticket/formal letter numbers. Would an 
> > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice, 
> > or must I install a trigger too to do additional checking?
> 
> You will have to lock the whole table and your parallel performance will
> be poor.

Locking the table isn't sufficient to guarantee that a sequence value
never skips.  What if a transaction fails and has to be rolled back?

I've written database systems that used pre-numbered checks; what's usually
necessary is to postpone the check-numbering phase until the number of
checks is finalized, so that there's not much chance of anything else 
causing a rollback.  
--
Mike Nolan

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] guaranteeing that a sequence never skips

2004-10-03 Thread Uwe C. Schroeder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sunday 03 October 2004 10:21 am, Scott Marlowe wrote:
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence
> > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> >
> > Sometimes a business requirement is that a serial sequence never skips,
> > e.g. when generating invoice/ticket/formal letter numbers. Would an
> > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> > or must I install a trigger too to do additional checking?
>
> You will have to lock the whole table and your parallel performance will
> be poor.
>

There was a thread about this a while back. I'm using a separate counter table 
and stored procs that increment the value of the counter, similar to the nextval 
used for sequences. My "nextval" locks the counter row in question using 
"... FOR UPDATE". So while I'm generating the record that requires the 
sequential number I'm in the same stored proc and therefore in a transaction.
If I have to roll back, the counter number in the counter table rolls back 
too. You just have to make sure the routine that generates whatever 
you have to generate doesn't take long, because parallel uses of the same 
counter will block until your proc commits or rolls back.
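
A minimal sketch of that approach (table, column and function names are only
illustrative, and plpgsql is assumed to be installed in the database):

CREATE TABLE counters (
    name  text   PRIMARY KEY,
    value bigint NOT NULL
);
INSERT INTO counters VALUES ('invoice', 0);

CREATE OR REPLACE FUNCTION next_counter(text) RETURNS bigint AS '
DECLARE
    n bigint;
BEGIN
    -- lock just this counter row until the surrounding transaction ends
    SELECT value + 1 INTO n FROM counters WHERE name = $1 FOR UPDATE;
    UPDATE counters SET value = n WHERE name = $1;
    RETURN n;
END;
' LANGUAGE plpgsql;

Called inside the same transaction that inserts the invoice, a rollback also
rolls the counter back, so no number is skipped; concurrent callers for the
same counter simply block until the first transaction finishes.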
 
UC

- --
Open Source Solutions 4U, LLC   2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFBYD6KjqGXBvRToM4RAgFOAKCeJnwA6PnXquCrUMwGbR9tQZBxdgCdGqyy
nwNbHafAiInSX+WTh5Uzb4o=
=Uixo
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] guaranteeing that a sequence never skips (fwd)

2004-10-03 Thread Scott Marlowe
On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > > Am I correct to assume that SERIAL does not guarantee that a sequence 
> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> > > 
> > > Sometimes a business requirement is that a serial sequence never skips, 
> > > e.g. when generating invoice/ticket/formal letter numbers. Would an 
> > > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice, 
> > > or must I install a trigger too to do additional checking?
> > 
> > You will have to lock the whole table and your parallel performance will
> > be poor.
> 
> Locking the table isn't sufficient to guarantee that a sequence value
> never skips.  What if a transaction fails and has to be rolled back?
> 
> I've written database systems that used pre-numbered checks, what's usually
> necessary is to postpone the check-numbering phase until the number of
> checks is finalized, so that there's not much chance of anything else 
> causing a rollback.  
> --

I didn't mean to use a sequence, sorry for being vague. I meant this:

lock table
select max(idfield)+1
insert new row
disconnect.
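
In SQL that could look something like this (a sketch, using a made-up
invoices table):

BEGIN;
-- EXCLUSIVE mode blocks other writers but still allows reads
LOCK TABLE invoices IN EXCLUSIVE MODE;
INSERT INTO invoices (invoice_no, customer_id)
    SELECT coalesce(max(invoice_no), 0) + 1, 42 FROM invoices;
COMMIT;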



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] earthdistance is not giving correct results.

2004-10-03 Thread Bruno Wolff III
On Sun, Oct 03, 2004 at 11:36:20 -0400,
  Jean-Luc Lachance <[EMAIL PROTECTED]> wrote:
> I agree, NS or EW long lat should be the same.
> I was just pointing to the wrong figure.  Also, if ll_to_earth takes lat 
> first, it should report an error for a |lat| > 90...

I disagree with this. Latitudes greater than 90 degrees have a reasonable
meaning and it can be useful to use 0 to 180 instead of -90 to 90.
The same thing applies to longitude.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [HACKERS] OT moving from MS SQL to PostgreSQL

2004-10-03 Thread Gary Doades
On 3 Oct 2004 at 11:24, Scott Marlowe wrote:

> On Sun, 2004-10-03 at 06:33, stig erikson wrote:
> There are a few tools I've seen that will try to convert ASP to PHP, but
> for the most part, they can't handle very complex code, so you're
> probably better off just rewriting it and learning PHP on the way.
> 
> By the way, I have moved this over to -general, as this is quite off
> topic for -hackers.  Next person to reply please remove the
> pgsql-hackers address from the CC list please.
> 

Also you might want to try converting it to ASP.NET. If you use the 
mono packages you can run ASP.NET on Windows and Linux/Unix with 
very little change (if any).

I have done this just fine with some C# ASP.NET stuff using Apache, 
PostgreSQL and mod-mono. Just needed recompilation because mono 
doesn't understand the debug stuff the MS produces. Otherwise no 
changes.

Cheers,
Gary.


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] guaranteeing that a sequence never skips

2004-10-03 Thread Christopher Browne
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (David Garamond) wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be
> 34)?

What is guaranteed is that sequence values will not be repeated
(assuming you don't do a setval() :-).)

If value caching is turned on, then each connection may grab them in
groups of (say) 100, so that one insert, on one (not-too-busy)
connection might add in 5399, and an insert on another connection,
that has been much busier, might add in 6522, and those values differ
quite a bit :-).

> Sometimes a business requirement is that a serial sequence never
> skips, e.g. when generating invoice/ticket/formal letter
> numbers. Would an INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1
> FROM t, ...) suffice, or must I install a trigger too to do
> additional checking?

This is a troublesome scenario...

1.  Your requirement makes it MUCH harder to deal with concurrent
updates efficiently.

That "SELECT MAX()" destroys the efficiency achieved by the use of
sequences.

2.  It may be difficult to avoid deadlocks of some sort.

Suppose several inserts take place more or less simultaneously.  In
that case, they might all get the same value of SELECT MAX(), and only
one of them could therefore succeed.  The others would get
"clotheslined" by the UNIQUE constraint, like a hapless fugitive that
runs into a tree branch, and you'll see transactions failing due to
concurrency.  Not a good thing.

Another possibility would be to have _two_ fields: one, call it C1,
using a sequence, and the other, C2, which gets populated later.

Periodically, a process goes through and calculates CURR=SELECT
MAX(C2), and then walks through all of the records populated with
values in C1.  For each non-null C1, it assigns C2 based on the value
of CURR, and then empties C1.

That means that there is a period of time during which the "ultimate"
sequence value, C2, is not yet populated, which might or might not be
a problem for your application.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "linuxfinances.info")
http://linuxfinances.info/info/linuxxian.html
Life's a duck, and then you sigh.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] guaranteeing that a sequence never skips (fwd)

2004-10-03 Thread Christopher Browne
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] ("Scott 
Marlowe") transmitted:
> On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
>> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
>> > > Am I correct to assume that SERIAL does not guarantee that a sequence 
>> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>> > > 
>> > > Sometimes a business requirement is that a serial sequence never skips, 
>> > > e.g. when generating invoice/ticket/formal letter numbers. Would an 
>> > > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice, 
>> > > or must I install a trigger too to do additional checking?
>> > 
>> > You will have to lock the whole table and your parallel performance will
>> > be poor.
>> 
>> Locking the table isn't sufficient to guarantee that a sequence value
>> never skips.  What if a transaction fails and has to be rolled back?
>> 
>> I've written database systems that used pre-numbered checks, what's usually
>> necessary is to postpone the check-numbering phase until the number of
>> checks is finalized, so that there's not much chance of anything else 
>> causing a rollback.  
>> --
>
> I didn't mean to use a sequence, sorry for being vague. I meant this:
>
> lock table
> select max(idfield)+1
> insert new row
> disconnect.

Yeah, that'll work, so long as you're prepared to wait for the table
to be available.

I think I like my idea of putting in provisional values, and then
fixing them up later...

You could do this via a sequence thus:

 select setval('ourseq', 250000000);  -- Make sure the sequence starts
  -- way high

 create index idf_250m on thistable(idfield) where idfield > 250000000;
  -- Provide an efficient way to look up the entries that need
  -- to get reset

Then, every once in a while, a separate process would go in, see the
highest value on idfield < 250M, and rewrite the idfield on all of the
tuples where idfield > 250M.  It would be efficient due to the partial
index.  It limits the number of documents to 250M, but I'm sure that
can be alleviated when it turns into an issue...
-- 
output = reverse("gro.mca" "@" "enworbbc")
http://linuxfinances.info/info/nonrdbms.html
Would I be  an optimist or a  pessimist if I said my  bladder was half
full?

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Out of memory errors on OS X

2004-10-03 Thread Scott Ribe
> Given that they have improved their SysV IPC support steadily over the
> past few Darwin releases, I don't see why you'd expect them to not be
> willing to do this.  Having a larger default limit costs them *zero* if
> the feature is not used, so what's the objection?

The objection would be attitudinal. I detect a whiff of "that's sooo
obsolete, you should get with the program and do it our way instead" in
their docs...


-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 665-7007 voice



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] guaranteeing that a sequence never skips (fwd)

2004-10-03 Thread Mike Nolan
> Then, every once in a while, a separate process would go in, see the
> highest value on idfield < 250M, and rewrite the idfield on all of the
> tuples where idfield > 250M.  It would be efficient due to the partial
> index.  It limits the number of documents to 250M, but I'm sure that
> can be alleviated when it turns into an issue...

I think you'd be better off using two columns.  Call the first one the
'work ticket' for the check request; you don't really care whether it has gaps
in it or not, since its primary purpose is to ensure that each check request 
has a unique document number of some kind, so a sequence works fine. 

One and only one program assigns the actual check numbers--in a separate 
column.
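
A rough sketch of such a numbering pass (all names hypothetical, assuming a
check_request table with a serial ticket column and an initially NULL
check_no column, and plpgsql installed), run by that single program:

CREATE OR REPLACE FUNCTION assign_check_numbers() RETURNS integer AS '
DECLARE
    r        record;
    n        integer;
    assigned integer := 0;
BEGIN
    -- only one numbering run at a time
    LOCK TABLE check_request IN EXCLUSIVE MODE;
    SELECT coalesce(max(check_no), 0) INTO n FROM check_request;
    FOR r IN SELECT ticket FROM check_request
             WHERE check_no IS NULL ORDER BY ticket LOOP
        n := n + 1;
        UPDATE check_request SET check_no = n WHERE ticket = r.ticket;
        assigned := assigned + 1;
    END LOOP;
    RETURN assigned;
END;
' LANGUAGE plpgsql;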

That's the sort of thing that most commercial packages do, even though it
seems clumsy and adds an extra step, and that's why they do it that way, too.
--
Mike Nolan

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Gaetano Mendola
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>> Christopher Browne wrote:
>>> Assuming that the tables in question aren't so large that they cause
>>> mass eviction of buffers, it should suffice to do a plain VACUUM (and
>>> NOT a "VACUUM FULL") on the tables in question quite frequently.
>
>> This is easy to say and almost impracticable. I run a 7.4.5 with the autovacuum:
>> pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
>
> I'm not very familiar at all with appropriate settings for autovacuum,
> but doesn't the above say to vacuum a table only when the dead space
> reaches 50%?  That seems awfully lax to me.  I've always thought one
> should vacuum often enough to keep dead space to maybe 10 to 25%.

The problem is that I can not set these values per table and per database,
so I had to find some compromise; however, I will test in the next days
what happens with -V 0.2.
However, every six hours I perform a vacuum on all databases and the HD space
continues to grow even with FSM parameters large enough.
I'll post in a couple of days about the new settings.
Regards
Gaetano Mendola


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Matthew T. O'Connor
On Sun, 2004-10-03 at 21:01, Gaetano Mendola wrote:
> Tom Lane wrote:
> > Gaetano Mendola <[EMAIL PROTECTED]> writes:
> > 
> >> Christopher Browne wrote:
> >> pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
> > 
> > I'm not very familiar at all with appropriate settings for autovacuum,
> > but doesn't the above say to vacuum a table only when the dead space
> > reaches 50%?  That seems awfully lax to me.  I've always thought one
> > should vacuum often enough to keep dead space to maybe 10 to 25%.

Yes that is what those options say.  The default values are even more
lax.  I wasn't sure how best to set them; I erred on the conservative
side.

> The problem is that I can not set these values per table and per database,
> so I had to find some compromise; however, I will test in the next days
> what happens with -V 0.2.
> 
> However, every six hours I perform a vacuum on all databases and the HD space
> continues to grow even with FSM parameters large enough.

Since you are running autovacuum, I doubt that doing vacuumdb -a -z 3
times a day is buying you much.  It's not a bad idea to do once in a while.

Given the way Postgres works, it is normal to have slack space in your
tables.  The real question is: do your tables stop growing?  At some point
you should reach a steady state where you have some percentage of slack
space that stops growing.

You said that after running for a week you have 400M of reclaimable
space.  Is that a problem?  If you don't do a vacuum full for two weeks
is it still 400M?  My guess is most of the 400M is created in the first
few hours (perhaps days) after running your vacuum full.

Matthew


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Gaetano Mendola
Matthew T. O'Connor wrote:
> On Sun, 2004-10-03 at 21:01, Gaetano Mendola wrote:
>> Tom Lane wrote:
>>> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>>>
>>>> Christopher Browne wrote:
>>>> pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
>>>
>>> I'm not very familiar at all with appropriate settings for autovacuum,
>>> but doesn't the above say to vacuum a table only when the dead space
>>> reaches 50%?  That seems awfully lax to me.  I've always thought one
>>> should vacuum often enough to keep dead space to maybe 10 to 25%.
>
> Yes that is what those options say.  The default values are even more
> lax.  I wasn't sure how best to set them; I erred on the conservative
> side.
>
>> The problem is that I can not set these values per table and per database,
>> so I had to find some compromise; however, I will test in the next days
>> what happens with -V 0.2.
>>
>> However, every six hours I perform a vacuum on all databases and the HD space
>> continues to grow even with FSM parameters large enough.
>
> Since you are running autovacuum, I doubt that doing vacuumdb -a -z 3
> times a day is buying you much.  It's not a bad idea to do once in a while.

The reason is that I have a few tables of about 5 million rows with ~ 1 insert per
day. Even with -v 300 -V 0.1 this means these tables will be analyzed
every 50 days, so I have to force it.

Regards
Gaetano Mendola

> Given the way Postgres works, it is normal to have slack space in your
> tables.  The real question is: do your tables stop growing?  At some point
> you should reach a steady state where you have some percentage of slack
> space that stops growing.
>
> You said that after running for a week you have 400M of reclaimable
> space.  Is that a problem?  If you don't do a vacuum full for two weeks
> is it still 400M?  My guess is most of the 400M is created in the first
> few hours (perhaps days) after running your vacuum full.
>
> Matthew
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Application user login/management

2004-10-03 Thread Michael Glaesemann
Hello all,
Recently I've been thinking about different methods of managing users 
that log into a PostgreSQL-backed application. The users I'm thinking 
of are not necessarily DBAs: they're application users that really 
shouldn't even be aware that they are being served by the world's most 
advanced open source database server. I appreciate any thoughts or 
feedback people may have, as I'm trying to decide which is the most 
appropriate way to move forward.

Method 1: Use PostgreSQL users and groups.
All application users will (unknowingly) be PostgreSQL users as well. 
Restrict access for these users to prevent them from logging into the 
PostgreSQL server directly, and limit their access to the DB using the 
built-in PostgreSQL access privilege mechanism. Updates occur through 
functions; selects are against views or using set returning functions. 
This method leverages built-in functionality. Drawbacks I see are that 
PostgreSQL users are unique to a cluster, rather than the db. This 
means that once a user exists in one db, they exist in all of the dbs. 
There might be users with the same name in other dbs, so that name is 
no longer available (though of course this can also occur in a single 
db as well). Also, it may be desirable to let usernames be retired for 
one person, but the user is not deleted, for example if their data is 
still required even though they are no longer active. One might want to 
allow a new user to be able to use this username, i.e., active 
usernames would be unique, rather than usernames in general.
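
A minimal sketch of what Method 1 can look like in SQL (all names made up;
selects go through a view restricted to the current user's own rows):

CREATE TABLE orders (
    id     serial PRIMARY KEY,
    owner  name NOT NULL DEFAULT current_user,
    amount numeric
);

CREATE USER alice WITH PASSWORD 'secret' NOCREATEDB NOCREATEUSER;
REVOKE ALL ON orders FROM PUBLIC;
CREATE VIEW my_orders AS
    SELECT * FROM orders WHERE owner = current_user;
GRANT SELECT ON my_orders TO alice;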

Method 2: Store username/password information as data in tables, using 
pgcrypto for authentication
In this scenario, middleware passes username/password combinations to 
PostgreSQL and functions within the database use contrib/pgcrypto to 
handle authentication. This allows a username to be 'retired' for one 
person and assigned to another. Another advantage is that using 
PostgreSQL functions for authentication mean that this doesn't need to 
be duplicated in middleware. A possible disadvantage is that it 
requires pgcrypto, though I don't know how much of a disadvantage this 
is, as it is a contrib library that ships with the standard PostgreSQL 
package.
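
A rough sketch of Method 2 (table and function names invented here, assuming
contrib/pgcrypto is installed for crypt() and gen_salt()):

CREATE TABLE app_user (
    username text PRIMARY KEY,
    pwhash   text NOT NULL,
    active   boolean NOT NULL DEFAULT true
);

-- store the password hashed with a random salt
INSERT INTO app_user (username, pwhash)
    VALUES ('alice', crypt('secret', gen_salt('md5')));

-- authenticate: re-crypt the supplied password using the stored hash as the salt
CREATE OR REPLACE FUNCTION check_login(text, text) RETURNS boolean AS '
    SELECT EXISTS (
        SELECT 1 FROM app_user
        WHERE username = $1
          AND active
          AND pwhash = crypt($2, pwhash)
    );
' LANGUAGE sql STABLE;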

Method 3: Store username/password information as data in tables, and 
use middleware for authentication
This seems to be the most popular method from what I've seen of open 
source packages. One reason for this may be that the middleware is 
designed to work with a number of different dbms backends, and 
different dbms' have different capabilities with respect to user 
management: it's just easier to take care of it in the middleware.

I lean towards the first and second methods, as I like to keep as much 
in the server as possible, and portability wrt the database server 
isn't as important to me as being able to develop different middleware 
against the same data.

Another thing on my mind is security. Any thoughts on the relative 
security of the three methods I've outlined above?

Thank you for any and all thoughts on this. I appreciate hearing 
other's views.

Regards,
Michael
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Application user login/management

2004-10-03 Thread Jason Sheets

Michael Glaesemann wrote:
Hello all,
Recently I've been thinking about different methods of managing users 
that log into a PostgreSQL-backed application. The users I'm thinking 
of are not necessarily DBAs: they're application users that really 
shouldn't even be aware that they are being served by the world's most 
advanced open source database server. I appreciate any thoughts or 
feedback people may have, as I'm trying to decide which is the most 
appropriate way to move forward.

Method 1: Use PostgreSQL users and groups.
All application users will (unknowingly) be PostgreSQL users as well. 
Restrict access for these users to prevent them from logging into the 
PostgreSQL server directly, and limit their access to the DB using the 
built-in PostgreSQL access privilege mechanism. Updates occur through 
functions; selects are against views or using set returning functions. 
This method leverages built-in functionality. Drawbacks I see are that 
PostgreSQL users are unique to a cluster, rather than the db. This 
means that once a user exists in one db, they exist in all of the dbs. 
There might be users with the same name in other dbs, so that name is 
no longer available (though of course this can also occur in a single 
db as well). Also, it may be desirable to let usernames be retired for 
one person, but the user is not deleted, for example if their data is 
still required even though they are no longer active. One might want 
to allow a new user to be able to use this username, i.e., active 
usernames would be unique, rather than usernames in general.
I've seen this method used successfully in some applications, I prefer 
to avoid using it as you must also create a PostgreSQL user for each 
application user.  Instead I use either method 2 or 3 for user 
authentication and then use method 1 to restrict the middleware's access 
to the database itself (don't give the application more access than it 
requires).

Method 2: Store username/password information as data in tables, using 
pgcrypto for authentication
In this scenario, middleware passes username/password combinations to 
PostgreSQL and functions within the database use contrib/pgcrypto to 
handle authentication. This allows a username to be 'retired' for one 
person and assigned to another. Another advantage is that using 
PostgreSQL functions for authentication mean that this doesn't need to 
be duplicated in middleware. A possible disadvantage is that it 
requires pgcrypto, though I don't know how much of a disadvantage this 
is, as it is a contrib library that ships with the standard PostgreSQL 
package.

If you are confident that (a.) you will either run the database server 
or (b.) have the authority to require that pgcrypto be installed on the 
database for all installations this may be a good solution.  Keep in 
mind you are limited to the encryption types supported by pgcrypto and 
moving to another database solution may be difficult.  I also can't 
comment on the availability of pgcrypto on Win32 but with PostgreSQL 8 
just around the corner the desire might be there to run the DB on 
Windows at some point.  libmcrypt is currently available in win32 but 
I've occasionally seen behavior differences with it on win32 vs. Unix.

Also keep in mind that if you are not using encrypted database 
connections (using PostgreSQL's built-in SSL support, SSH tunneling or 
another technique) you may be sending users' passwords across the 
network in plain text for the database to use.  I would either ensure 
that all connections are encrypted or, preferably, at least hash the 
password with SHA-1 on the application side and pass that as the 
password to the back-end; SHA-1 is available in almost all languages 
these days, and this technique may also remove the requirement of using 
pgcrypto on the back-end.

If you are going to use multiple interfaces to the application this may 
be the best choice as you don't have to re-implement the security system 
for each client application.

Method 3: Store username/password information as data in tables, and 
use middleware for authentication
This seems to be the most popular method from what I've seen of open 
source packages. One reason for this may be that the middleware is 
designed to work with a number of different dbms backends, and 
different dbms' have different capabilities with respect to user 
management: it's just easier to take care of it in the middleware.

I lean towards the first and second methods, as I like to keep as much 
in the server as possible, and portability wrt the database server 
isn't as important to me as being able to develop different middleware 
against the same data.

This is the technique I've used pretty often; it gives me very powerful 
application integration and allows me to more easily support different 
back-ends if the customer so chooses (I currently go with PostgreSQL and 
SQLite). The biggest drawback you've already touched on: the system is 
implemented in

Re: [GENERAL] Application user login/management

2004-10-03 Thread Scott Marlowe
On Sun, 2004-10-03 at 22:23, Michael Glaesemann wrote:
> Hello all,
> 
> Recently I've been thinking about different methods of managing users 
> that log into a PostgreSQL-backed application. The users I'm thinking 
> of are not necessarily DBAs: they're application users that really 
> shouldn't even be aware that they are being served by the world's most 
> advanced open source database server. I appreciate any thoughts or 
> feedback people may have, as I'm trying to decide which is the most 
> appropriate way to move forward.

The method I worked with was similar to your method 3, of maintaining
the info in tables, but more complex, and easier to handle for large
numbers of users.

We built an OpenLDAP server and wrote some scripts to maintain that and
allow for group editing.  This structure existed completely outside of
either the database or the application.  Then Apache handled all the
authentication through LDAP.  The application was given
standard libs / includes that allowed for checking a username against
any group, etc., so that all the yes / no of being allowed somewhere or
allowed to do something was kept in the LDAP database.

This allowed us to allow owners of given groups to edit them by
themselves, i.e. the Director of Marketing could both add other junior
admins to the marketing groups, and could edit members of all the
marketing groups.  Note that these groups / authentication are then
accessible to all other applications in the company that are LDAP
aware.  And there's a lot of stuff that can work with LDAP and / or
apache/http auth against LDAP authentication.

This allows you to scale your authentication and group management
independently of any scaling issues with your application servers. 
Since single master / multi slave OpenLDAP is a pretty easy thing to
implement, the only applications that need to access the master can be
set to the ldap editing applications (group editor, update scripts,
etc...) while standard old authentication can be pointed at one or more
slaves.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] [HACKERS] OT moving from MS SQL to PostgreSQL

2004-10-03 Thread Scott Marlowe
On Sun, 2004-10-03 at 06:33, stig erikson wrote:
> Hello.
> i have an slightly off topic question, but i hope that somebody might know.
> 
> at the moment we have a database on a MS SQL 7 server.
> This data will be transfered to PostgreSQL 7.4.5 or PostgreSQL 8 (when 
> it is released). so far so good.
> 
> the question now arises, this current database is used in web 
> application made with ASP on IIS5. The idea is to move the database and 
> the application to a linux or unix environment. Is there a tool that can 
> be used convert ASP pages into PHP (or any other language suitable for 
> linux/unix), or should we prepare to rewrite most of the code?
> 
> Is there a tool, some add-in to apache perhaps that can run ASP code on 
> linux/unix, this would help to have the system running while we recode 
> the application.

There are a few tools I've seen that will try to convert ASP to PHP, but
for the most part, they can't handle very complex code, so you're
probably better off just rewriting it and learning PHP on the way.

By the way, I have moved this over to -general, as this is quite off
topic for -hackers.  The next person to reply, please remove the
pgsql-hackers address from the CC list.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] GnuPG / PGP signed checksums for PostgreSQL 7.4.5, 7.4.4, 7.3.7, 7.3.6, 7.3.5. 7.2.5

2004-10-03 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


This is a PGP-signed copy of the checksums for following 
PostgreSQL versions:

7.4.5
7.4.4
7.3.7
7.3.6
7.3.5
7.2.5

The latest copy of the checksums for these and other versions, as well 
as information on how to verify the files you download for yourself, 
can be found at:

http://www.gtsm.com/postgres_sigs.html


## Created with md5sum:
97e750c8e69c208b75b6efedc5a36efb  postgresql-7.4.5.tar.bz2
bffc3fe775c885489f9071e97f43ab9b  postgresql-base-7.4.5.tar.bz2
548a73c898e65f901dbc06d622a2bc63  postgresql-docs-7.4.5.tar.bz2
8be416baeeb32518f2b17a91c4caafba  postgresql-opt-7.4.5.tar.bz2
73b8ee0f7ff0ca24cca50434b7276dc1  postgresql-test-7.4.5.tar.bz2
a68d368159319a620074e70d76fbd14b  postgresql-7.4.5.tar.gz
f18c3d6e88b0b7d7dfcccf06d2884bf9  postgresql-base-7.4.5.tar.gz
4caf0e0f3f094ac21e4b4ff5c49ef6e9  postgresql-docs-7.4.5.tar.gz
c23937f00f1d3421a9c2d3ba608d130c  postgresql-opt-7.4.5.tar.gz
86174904ccb9a2898836010b016183cf  postgresql-test-7.4.5.tar.gz

0433f4b34cbd16dd30e922cefa286db5  postgresql-7.4.4.tar.bz2
3c03ac47ecd7fad4c09bf1b0b223  postgresql-base-7.4.4.tar.bz2
6b32dd938322ae8a97504e42abb10697  postgresql-docs-7.4.4.tar.bz2
c9e073c292148bed6bc2b5e72ab5cdea  postgresql-opt-7.4.4.tar.bz2
444cf315b44f134c6f31292b49d2e1b1  postgresql-test-7.4.4.tar.bz2
c74d816f5d771fb1f835b43286251165  postgresql-7.4.4.tar.gz
1e21526c90a0b735d4d663fbdfa626be  postgresql-base-7.4.4.tar.gz
eec55a1b56fee236dbad271603db8ee2  postgresql-docs-7.4.4.tar.gz
83dd7baa3ce1f0194b6fee08f16dfdea  postgresql-opt-7.4.4.tar.gz
5c7e04dafa829b9dd7164d036b12ae4e  postgresql-test-7.4.4.tar.gz

8b34cebb1cf30a6a020e1075fee3567b  postgresql-7.3.7.tar.bz2
d64e0eca8025caa2e35e5d8226a1afbf  postgresql-base-7.3.7.tar.bz2
53bb62e0d7a302b0e626436001f9b242  postgresql-docs-7.3.7.tar.bz2
d3a8a078e786abc5b6c52fac2663a125  postgresql-opt-7.3.7.tar.bz2
6f1c9bc20c01d9490e9b24563dd942a2  postgresql-test-7.3.7.tar.bz2
8db987fb5b406433fb9e2146db91f38f  postgresql-7.3.7.tar.gz
89d0d3b083d2554365f2c8668fe4a136  postgresql-base-7.3.7.tar.gz
2a1f275fd9993511d3d959d5334c8e51  postgresql-docs-7.3.7.tar.gz
ce51b432ebd22e316a7d3636c5a10969  postgresql-opt-7.3.7.tar.gz
d2249ee1088161fef76de6635c065ce8  postgresql-test-7.3.7.tar.gz

6a36ee526dace32667b62f7216a4b9a6  postgresql-7.3.6.tar.bz2
80b1649458ed7b0e765fb19bcb81c7aa  postgresql-base-7.3.6.tar.bz2
ec0cf85996049eb0180a2163c482c02c  postgresql-docs-7.3.6.tar.bz2
49b6faa1698c6d9f357e13236f7ca777  postgresql-opt-7.3.6.tar.bz2
fb943f1f4ab837a57a477378ae135806  postgresql-test-7.3.6.tar.bz2
e29bd379789a59e061d5cf126024913f  postgresql-7.3.6.tar.gz
6d35055a09fdb86cbbb6f4e556e038ef  postgresql-base-7.3.6.tar.gz
f2504dbc83f7fd0aeb2cb956582b  postgresql-docs-7.3.6.tar.gz
31b706e2e95890682928dbb1138d6340  postgresql-opt-7.3.6.tar.gz
b72a9c4b9f69cb8d1cea2561a2d41930  postgresql-test-7.3.6.tar.gz

2dffe7425252a7e0efbc8acbc4931b73  postgresql-7.3.5.tar.bz2
071efb8cee72a62b4f0da478df39f08d  postgresql-base-7.3.5.tar.bz2
7354a7c9cc5f1203586131e454df270e  postgresql-docs-7.3.5.tar.bz2
d106ee6a4b0fb1d63eb51525ee8faed6  postgresql-opt-7.3.5.tar.bz2
eaf4977a5e81e6bf8abe66762ef9aab5  postgresql-test-7.3.5.tar.bz2
ef2751173050b97fad8592ce23525ddf  postgresql-7.3.5.tar.gz
dce1170ac37ba9a215c57bde3ad35465  postgresql-base-7.3.5.tar.gz
d95f5f07723a6e1439ed6b0109a53080  postgresql-docs-7.3.5.tar.gz
679baed3ea8f19e5584d49712d295801  postgresql-opt-7.3.5.tar.gz
163aca0105144396b0363985647fe72c  postgresql-test-7.3.5.tar.gz

9d5dfba26ab008eabf07e1d3ee842afa  postgresql-7.2.5.tar.bz2
4cf55aca4395152193daaf4cf7d9fb65  postgresql-base-7.2.5.tar.bz2
6cf65b607c3db65ea511cde7a3615e8c  postgresql-docs-7.2.5.tar.bz2
84dd18d1f91fa96176d4212a6f2fe383  postgresql-opt-7.2.5.tar.bz2
1178b42e221a729f70d7e73878b84479  postgresql-test-7.2.5.tar.bz2
8cda6488a8c4a7b579355f0d327e8a84  postgresql-7.2.5.tar.gz
332d0d5ba7d41614635b62466dbca026  postgresql-base-7.2.5.tar.gz
14f479be516e37ee7263557311f18da4  postgresql-docs-7.2.5.tar.gz
2fceaf41384e4d03b90ce2af59b0e247  postgresql-opt-7.2.5.tar.gz
b5365037381491cf1c87e5e7731f54bc  postgresql-test-7.2.5.tar.gz


## Created with sha1sum:
42582179398106fb9cfd5fac44f9fc7c614b07ef  postgresql-7.4.5.tar.bz2
446cea272dcca0327672a488dd2eee906589172d  postgresql-base-7.4.5.tar.bz2
4684efc72683fda9df5441adb35d383cd9d0ea51  postgresql-docs-7.4.5.tar.bz2
c50b329ad0e10cbf0782a1e4ba7bf4fcc788c45b  postgresql-opt-7.4.5.tar.bz2
9b686d23a11e5da5ece7967a05989644559fe342  postgresql-test-7.4.5.tar.bz2
a12f1f42b2893e3b89adf01fd49cad8c90cc89d5  postgresql-7.4.5.tar.gz
b4eb8c2b9da0b5dd82d3485d664109048c3dc454  postgresql-base-7.4.5.tar.gz
688ab4c418a7f656493a220ba74ae56788bf3e06  postgresql-docs-7.4.5.tar.gz
5b72af080269cff164c3357094963df87800c9f9  postgresql-opt-7.4.5.tar.gz
bfb75199e933eee88d520c3c600a9e83c8b58bf4  postgresql-test-7.4.5.tar.gz

3a9a91cfda9f80a8166027497f796bd662a32e0b  postgresql-7.4.4.tar.bz2
14f2890da563be57fc22c7246808327ce24725a7  postgresql-base-7.4.4.tar