Nikhil,

* Nikhil G Daddikar (n...@celoxis.com) wrote:
> We use PostgreSQL 9 on our production server and I was wondering if
> there is a way to know when pages get corrupted.

It's not great, but there are a few options.  The first is to run
pg_dump across the entire database and watch the PG logs to see if it
barfs about anything.  Another option is to write a script which pulls
all of the data out of each table using an ORDER BY that matches an
index on the table; PG will, generally, use an in-order index
traversal, which validates both the index and the heap, again, to some
extent.  A rough sketch of that second approach is below.
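
For whatever it's worth, here is one way such a script could look in
Python.  This is only a sketch: it assumes psycopg2, a placeholder
connection string and database name ("mydb"), plain unquoted
identifiers, and it simply skips tables without a primary key.

    #!/usr/bin/env python
    # Rough sketch: read every user table in primary-key order so that PG
    # will (usually) do an in-order index traversal, touching both the
    # index and the heap pages along the way.
    import subprocess
    import sys

    import psycopg2

    DSN = "dbname=mydb"   # assumption -- point this at your database

    def main():
        # Cheap first pass: a full pg_dump thrown away.  It will error
        # out loudly if it hits corrupted heap pages or broken TOAST data.
        if subprocess.call(["pg_dump", "-f", "/dev/null", "mydb"]) != 0:
            sys.exit("pg_dump failed -- check the server logs")

        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        cur.execute("""
            SELECT schemaname, tablename
              FROM pg_tables
             WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
        """)
        for schema, table in cur.fetchall():
            # Pull the primary-key columns out of the catalogs.
            pk = conn.cursor()
            pk.execute("""
                SELECT a.attname
                  FROM pg_index i
                  JOIN pg_attribute a
                    ON a.attrelid = i.indrelid AND a.attnum = ANY (i.indkey)
                 WHERE i.indrelid = %s::regclass AND i.indisprimary
            """, ('%s.%s' % (schema, table),))
            cols = [r[0] for r in pk.fetchall()]
            pk.close()
            if not cols:
                continue    # no primary key -- skip it (or seq-scan it)

            # Server-side (named) cursor so the rows are streamed rather
            # than pulled into memory all at once.
            scan = conn.cursor(name='corruption_scan')
            scan.execute('SELECT * FROM %s.%s ORDER BY %s'
                         % (schema, table, ', '.join(cols)))
            rows = sum(1 for _ in scan)
            scan.close()
            print('%s.%s: %d rows read' % (schema, table, rows))

        conn.close()

    if __name__ == '__main__':
        main()

Note that the planner only tends to pick the index scan; if you want to
be more sure the index actually gets walked, you could set
enable_seqscan = off for that session.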

> I see that
> there is some kind of checksum maintained from 9.3 but till then is
> there a way to be notified quickly when such a thing happens? I use
> a basebackup+rsync of WAL files as a disaster recovery solution.
> Will this be useful when such a scenario occurs?

It really depends.  Having multiple backups over time will limit the
risk that corruption gets propagated to a slave system.  Also, there is
a CRC on the WAL records which are shipped, which helps a bit, but there
are still cases where corruption can get you.  The best thing is to have
frequent, tested backups.
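
On the "tested" part, the simplest periodic check is to restore the
latest basebackup somewhere disposable, let it replay the archived WAL,
and run the same pg_dump check against it.  A minimal sketch, assuming
the backup has already been unpacked into /tmp/restore_test with an
appropriate recovery.conf, and that the port and database name are
placeholders:

    import subprocess

    DATADIR = "/tmp/restore_test"  # assumption: basebackup unpacked here
    PORT = "54321"                 # throwaway port for the scratch instance

    # Start a disposable postmaster on the restored data directory; with
    # a recovery.conf pointing at the WAL archive it replays to the end
    # of the archive before accepting connections (-w waits for startup).
    subprocess.check_call(["pg_ctl", "-D", DATADIR, "-w",
                           "-o", "-p %s" % PORT, "start"])
    try:
        # pg_dump will complain loudly if the restored cluster has bad pages.
        rc = subprocess.call(["pg_dump", "-p", PORT, "-f", "/dev/null", "mydb"])
        print("pg_dump exit code: %d" % rc)
    finally:
        subprocess.check_call(["pg_ctl", "-D", DATADIR, "-m", "fast", "stop"])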

        Thanks,
                
                Stephen
