[GENERAL] Fast backup/restore

2006-10-16 Thread Gandalf
I am looking for a *fast* backup/restore tool for Postgres. I've found the currently used tools, pg_dump and pg_restore, to be very slow on large databases (~30-40GB). A restore takes on the order of 6 hours on a Linux machine with 4 processors and 32 GB RAM, which is not acceptable.

 
I am using "pg_dump -Fc" to take backups. I understand binary compression adds to the time, but other databases (like DB2) take much less time on similar data sizes.
 
Are there faster tools available?
 
Thanks.
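For context, the dump and restore being timed look roughly like this; the database name and file path are illustrative:

```shell
# Compressed custom-format dump (what "pg_dump -Fc" produces).
pg_dump -Fc -f /backups/mydb.dump mydb

# Restore into a freshly created database.
createdb mydb_restored
pg_restore -d mydb_restored /backups/mydb.dump
```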
 


Re: [GENERAL] Fast backup/restore

2006-10-16 Thread Harald Armin Massa
Gandalf,

have a look at http://momjian.us/main/writings/pgsql/administration.pdf, page 44ff. There are descriptions of how to do database backups at the speed of raw filesystem operations.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


Re: [GENERAL] Fast backup/restore

2006-10-16 Thread Jeff Davis
On Mon, 2006-10-16 at 16:29 +0530, Gandalf wrote:
> I am looking for a *fast* backup/restore tools for Postgres. I've
> found the current used tools pg_dump and pg_restore to be very slow on
> large databases (~30-40GB). Restore takes time in the tune of 6 hrs on
> a Linux, 4 proc, 32 G RAM machine which is not acceptable.
>  
> I am using "pg_dump -Fc" to take backup. I understand binary
> compression adds to the time, but there are other databases (like DB2)
> which take much less time on similar data sizes.
>  
> Are there faster tools available?
>  

http://www.postgresql.org/docs/8.1/static/backup-online.html

With that backup system, you can backup with normal filesystem-level
tools (e.g. tar) while the database is online.

Make sure to back up the remaining active WAL segments as well; they
are necessary for the backup to be complete. This step will be done
automatically in 8.2.

If your filesystem has snapshot capability, you have nothing to worry
about. Just snapshot the fs and backup the data directory plus any WAL
segments and tablespaces.

Regards,
Jeff Davis
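The online-backup procedure Jeff describes can be sketched as follows (8.1-era functions; backup label, database name, and paths are illustrative):

```shell
# Tell the server a base backup is starting (pg_start_backup/pg_stop_backup
# are the 8.1-era names; much later versions renamed them).
psql -d postgres -c "SELECT pg_start_backup('nightly');"

# Copy the cluster with ordinary filesystem tools while the server runs.
tar -czf /backups/base-$(date +%Y%m%d).tar.gz -C /var/lib/pgsql data

# Tell the server the backup is finished; it records the WAL range needed.
psql -d postgres -c "SELECT pg_stop_backup();"

# The WAL segments archived between start and stop must be kept -- without
# them the base backup cannot be restored to a consistent state.
```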


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Fast backup/restore

2006-10-17 Thread mengel

We just tar/gzip the entire data directory.
It takes all of 20 sec.  We've successfully restored from that
also.  The machine you are restoring to *must* be running the same
version of postgresql you backed up from.


Matthew Engel








Re: [GENERAL] Fast backup/restore

2006-10-17 Thread Tom Lane
[EMAIL PROTECTED] writes:
> We just tar/gzip the entire data directory.  It takes all of 20 sec. We've 
> successfully restored from that also.

You've been very lucky ... unless you stopped the postmaster while
taking the backup.  Without that, this method WILL screw you someday.

(But as long as you're willing to stop the postmaster, it's a perfectly
reasonable option.)

regards, tom lane
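Tom's caveat amounts to a cold backup: stop the postmaster first, then tar. A sketch (paths and dates are illustrative):

```shell
# Stop the postmaster so no files change during the copy.
pg_ctl -D /var/lib/pgsql/data stop -m fast

# With the server down, a plain tar/gzip of the data directory
# is a consistent backup.
tar -czf /backups/cold-$(date +%Y%m%d).tar.gz -C /var/lib/pgsql data

# Restart the server.
pg_ctl -D /var/lib/pgsql/data start
```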



Re: [GENERAL] Fast backup/restore

2006-10-17 Thread Vivek Khera
On Oct 17, 2006, at 10:43 AM, [EMAIL PROTECTED] wrote:

> We just tar/gzip the entire data directory.  It takes all of 20 sec.
> We've successfully restored from that also.  The machine you are
> restoring to *must* be running the same version of postgresql you
> backed up from.

If you successfully backed up in 20 seconds, then you have a tiny DB.  Also, if you successfully restored from that style backup, your DB must not be written to much, or you were extremely lucky to get a consistent state.



Re: [GENERAL] Fast backup/restore

2006-10-17 Thread Steve Poe
Vivek,

What methods of backup do you recommend for medium to large databases? In our example, we have a 20GB database and it takes 2 hrs to load from a pg_dump file.

Thanks.

Steve Poe

On 10/17/06, Vivek Khera <[EMAIL PROTECTED]> wrote:
> On Oct 17, 2006, at 10:43 AM, [EMAIL PROTECTED] wrote:
>
> > We just tar/gzip the entire data directory.  It takes all of 20 sec.
> > We've successfully restored from that also.  The machine you are
> > restoring to *must* be running the same version of postgresql you
> > backed up from.
>
> If you successfully backed up in 20 seconds, then you have a tiny DB.
> Also, if you successfully restored from that style backup, your DB must
> not be written to much, or you were extremely lucky to get a consistent
> state.



Re: [GENERAL] Fast backup/restore

2006-10-17 Thread Vivek Khera


On Oct 17, 2006, at 2:35 PM, Steve Poe wrote:

> Vivek,
>
> What methods of backup do you recommend for medium to large
> databases? In our example, we have a 20GB database and it takes 2
> hrs to load from a pg_dump file.

my largest db is about 60Gb with indexes.  reloading the data (about
30Gb) takes 1 hour from compressed-format pg_dump, and another two to
reindex.

for increasing reload speed, bump your checkpoint_segments to
something big, like 128 or 256, depending on how much disk space you
can spare for it.
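The checkpoint_segments bump is a postgresql.conf change (an 8.1-era parameter; the value shown is illustrative):

```
# postgresql.conf -- allowing more WAL segments between checkpoints means
# fewer forced checkpoints during a bulk reload.
checkpoint_segments = 128   # each segment is 16MB; budget disk accordingly
```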







Re: [GENERAL] Fast backup/restore

2006-10-18 Thread Jim C. Nasby
On Tue, Oct 17, 2006 at 02:43:28PM -0400, Vivek Khera wrote:
> 
> On Oct 17, 2006, at 2:35 PM, Steve Poe wrote:
> 
> >Vivek,
> >
> >What methods of backup do you recommend for medium to large  
> >databases? In our example, we have a 20GB database and it takes 2  
> >hrs to load from a pg_dump file.
> >
> 
> my largest db is about 60Gb with indexes.  reloading the data (about  
> 30Gb) takes 1 hour from compressed format pg_dump, and another two to  
> reindex.
> 
> for increasing reload size, bump your checkpoint_segments to  
> something big, like 128 or 256 depending on how much disk space you  
> can spare for it.

Other hints for restoring from pg_dump:

- also increase wal_buffers
- set fsync = off (for the duration of the restore only)
- set maintenance_work_mem as high as you can (note that values over 1GB
  generally don't work)
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
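Combined with the checkpoint_segments advice quoted above, Jim's restore-time hints would look like this in postgresql.conf (8.1-era names; values are illustrative, and fsync must be turned back on after the load):

```
# postgresql.conf -- temporary settings for a bulk pg_restore.
wal_buffers = 1024               # in 8kB pages here: 8MB of WAL buffering
fsync = off                      # UNSAFE in normal operation; restore-only
maintenance_work_mem = 1048576   # in kB here: 1GB, speeds index builds
```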
