Lamar Owen wrote:
> 
> On Friday 06 July 2001 18:51, Naomi Walker wrote:
> > If PostgreSQL is run on a system that has a file size limit (2 GB?), where
> > might we hit the limit?
> 
> Since PostgreSQL automatically segments its internal data files to get around
> such limits, the only place you will hit this limit will be when making
> backups using pg_dump or pg_dumpall.  You may need to pipe the output of

Speaking of which.

Doing a dumpall for a backup is taking a long time, and a restore from
the dump files doesn't leave the database in its original state.  Could
a command be added that locks all the files, quickly tars them up, then
releases the lock?
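
On the pg_dump size limit quoted above, a common workaround is to pipe
the dump through gzip and split so no single output file exceeds the
limit.  A sketch (the database name "mydb" is an assumed example, and the
pg_dump/psql lines need a running server; the split/cat round trip is
shown on ordinary data):

```shell
# Illustrative backup/restore pipeline (requires a PostgreSQL server,
# database name "mydb" is hypothetical):
#
#   pg_dump mydb | gzip | split -b 1000m - mydb.dump.gz.   # back up
#   cat mydb.dump.gz.* | gunzip | psql mydb                # restore
#
# The split/cat round trip itself, demonstrated on plain data:
tmpdir=$(mktemp -d)
printf 'line1\nline2\nline3\n' > "$tmpdir/sample.txt"
# Split into 6-byte chunks (stands in for the 1000m chunks above)
split -b 6 "$tmpdir/sample.txt" "$tmpdir/sample.part."
# Reassemble the chunks; shell glob order restores the original sequence
cat "$tmpdir"/sample.part.* > "$tmpdir/rejoined.txt"
cmp "$tmpdir/sample.txt" "$tmpdir/rejoined.txt" && echo "round trip ok"
rm -r "$tmpdir"
```

The split suffixes (.aa, .ab, ...) sort lexically, which is why a plain
`cat` of the glob restores the stream in order.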

-- 
Joseph Shraibman
[EMAIL PROTECTED]
Increase signal to noise ratio.  http://www.targabot.com
