[GENERAL] 7.1 dumps with large objects

2001-04-14 Thread David Wall

Wonderful job on getting 7.1 released.  I've just installed it in place of a
7.1beta4 database, with the great advantage of not even having to migrate
the database.

It seems that 7.1 is able to handle large objects in its dump/restore
natively now and no longer requires the use of the contrib program to dump
them.  Large objects are represented by OIDs in the table schema, and I'm
trying to make sure that I understand the process correctly from what I've
read in the admin guide and command reference guide.

In my case, the OID does not mean anything to my programs, and they are not
used as keys.  So I presume that I don't really care about preserving OIDs.
Does this just mean that if I restore a blob, it will get a new OID, but
otherwise everything will be okay?

This is my plan of attack:

To backup my database (I have several databases running in a single
postgresql server, and I'd like to be able to back them up separately since
they could move from one machine to another as the loads increase), I'll be
using:

pg_dump -b -Fc dbname > dbname.dump

Then, to restore, I'd use:

pg_restore -d dbname dbname.dump

Is that going to work for me?
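Spelled out as a full cycle, this is what I have in mind (the database names are just placeholders, and I'm assuming from my reading of the docs that the database has to be recreated with createdb on the target machine before restoring into it):

```shell
# Dump each database separately, in custom format, including blobs (-b):
pg_dump -b -Fc db1 > db1.dump
pg_dump -b -Fc db2 > db2.dump

# On whichever machine the database moves to, recreate it and restore:
createdb db1
pg_restore -d db1 db1.dump
```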

I also noted that pg_dump has a -Z level specifier for compression.  When
not specified, the backup showed a compression level of "-1" (using
pg_restore -l).  Is that the highest compression level, or does that mean it
was disabled?  I did note that the -Fc option created a file that was larger
than a plain file, and not anywhere near as small as if I gzip'ed the
output.  In my case, it's a very small test database, so I don't know if
that's the reason, or whether -Fc by itself doesn't really compress unless
the -Z option is used.

And for -Z, is 0 or 9 the highest compression level?  Is there a particular
value that's generally considered the best tradeoff between speed and space?
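In case it helps to see what I'd be comparing, this is roughly the experiment behind my question (the level semantics, 0 for none and 9 for maximum, are my assumption based on zlib, not something I've confirmed in the docs):

```shell
# Assumption: -Z 0 disables compression, -Z 9 is maximal (zlib-style levels).
pg_dump -Fc -Z 0 dbname > dbname.z0.dump
pg_dump -Fc -Z 9 dbname > dbname.z9.dump

# Plain-text dump piped through gzip, for comparison:
pg_dump dbname | gzip -9 > dbname.sql.gz

# Compare resulting file sizes:
ls -l dbname.z0.dump dbname.z9.dump dbname.sql.gz
```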

Thanks,
David


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] 7.1 dumps with large objects

2001-04-14 Thread Tom Larard

On Sat, 14 Apr 2001, David Wall wrote:
> It seems that 7.1 is able to handle large objects in its dump/restore
> natively now and no longer requires the use of the contrib program to dump
> them.  Large objects are represented by OIDs in the table schema, and I'm
> trying to make sure that I understand the process correctly from what I've
> read in the admin guide and command reference guide.

Hmmn, as you clearly know how to dump blobs in the old versions, can you
tell me how to do it, or point me in the direction of the 'contrib'
program that you spoke of?

Thanks

