I would also be interested in any "creative" ways to reduce the size of
and time needed to back up databases/clusters. We were just having a
conversation about this yesterday. We were mulling over things like
using rsync to back up only the files in the database directory tree
that have actually changed, or perhaps doing a selective backup of files
based on modification times, but we were unsure whether this would be a
safe, reliable way to back up a reduced set of data.
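
For what it's worth, here's a rough sketch of the modification-time
idea (Python, purely illustrative; the paths and the stamp-file name
are assumptions, not anything we actually run). rsync's default
timestamp-plus-size comparison does essentially the same selection.
Either way, a file-level copy like this is only consistent if it runs
between pg_start_backup() and pg_stop_backup() with WAL archiving
enabled:

    # Hypothetical sketch: copy only files under the data directory whose
    # mtime is newer than the previous run. NOT a consistent backup on its
    # own; wrap it in pg_start_backup()/pg_stop_backup() + WAL archiving.
    import os, shutil, time

    DATA_DIR = "/var/lib/pgsql/data"   # assumed data directory
    BACKUP_DIR = "/backup/pgdata"      # assumed destination
    STAMP = os.path.join(BACKUP_DIR, ".last_backup")

    # timestamp of the previous pass, or 0 for a first full copy
    last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0

    for root, _dirs, files in os.walk(DATA_DIR):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_run:
                dst = os.path.join(BACKUP_DIR,
                                   os.path.relpath(src, DATA_DIR))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves the mtime

    # record when this pass finished, for the next incremental run
    with open(STAMP, "w") as f:
        f.write(str(time.time()))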

Doug Knight
WSI Inc.
Andover, MA
 
On Fri, 2007-02-09 at 12:45 +0530, [EMAIL PROTECTED] wrote:
> 
> Hi Folks, 
> 
> We have a requirement to deal with large databases (terabytes in
> size) when we go into production. What is the best database backup
> mechanism, and what are the possible issues? 
> 
> pg_dump can back up the database, but the dump file is limited by the
> OS file-size limit. What about the option of compressing the dump
> file? How much time does it generally take for large databases? I
> heard that it would be way too long (even one or two days). I haven't
> tried it out, though. 
> 
> What about taking a zipped backup of the database directory? We tried
> this out, but the checkpoint data in the pg_xlog directory is also
> being backed up. Since these logs keep on increasing from day 1 of
> database creation, the backup size is increasing drastically. 
> Can we back up certain subdirectories without loss of information or
> consistency? 
> 
> Any quick comments/suggestions in this regard would be very helpful. 
> 
> Thanks in advance, 
> Ravi Kumar Mandala
