On Tue, May 21, 2013 at 05:28:31PM +0400, Evgeny Shishkin wrote:
> 
> On May 21, 2013, at 5:18 PM, Jeison Bedoya <jeis...@audifarma.com.co> wrote:
> 
> > Hi people, I have a 400GB database running on a server with 128GB of RAM 
> > and 32 cores, with storage on a SAN over Fibre Channel. The problem is that 
> > a backup with pg_dumpall takes about 5 hours, and the subsequent restore 
> > takes about 17 hours. Is that a normal time for this process on that 
> > machine, or can I do something to optimize the backup/restore?
> > 
> 
> I'd recommend dumping with 
> 
> pg_dump --format=c
> 
> It will compress the output and later you can restore it in parallel with
> 
> pg_restore -j 32 (for example)
> 
> Right now you cannot dump in parallel; wait for the 9.3 release, or maybe 
> someone will back-port it to the 9.2 pg_dump.
> 
> Also, during restore you can speed things up a little more by disabling 
> fsync and synchronous_commit. 
> 
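
Putting that together, a rough sketch of the workflow Evgeny describes might 
look like this (the database name "mydb" and the output file name are 
placeholders, not values from the original post):

    # dump in the custom format; the output is compressed and can be
    # restored in parallel
    pg_dump --format=c --file=mydb.dump mydb

    # restore using 32 parallel jobs into an existing (empty) database
    pg_restore --jobs=32 --dbname=mydb mydb.dump

To relax durability only for the duration of the restore, something like the 
following in postgresql.conf (followed by a reload) should work; remember to 
revert it afterwards, since fsync = off risks data loss if the server crashes:

    # temporary settings for a bulk restore only
    fsync = off
    synchronous_commit = off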

If you have the space and I/O capacity, avoiding the compress option will be
much faster. The current compression scheme uses zlib-style compression, which
is very CPU-intensive and limits your dump rate. On one of our systems, a
dump without compression takes 20 minutes versus 2h20m with compression. The
parallel restore makes a big difference as well.
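
If you want to skip compression but keep the custom format (so the restore can 
still run in parallel), something along these lines should do it; the file and 
database names are again just illustrative:

    # custom-format dump with compression disabled; pg_restore -j still works
    pg_dump --format=c --compress=0 --file=mydb.dump mydb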

Regards,
Ken


