"Mark Cave-Ayland" <[EMAIL PROTECTED]> writes:
>> Alternatively, we might say that 64-bit CRC was overkill from 
>> day one, and we'd rather get the additional 10% or 20% or so 
>> speedup.  I'm kinda leaning in that direction, but only weakly.

> What would you need to persuade you either way? I believe that disk drives
> use CRCs internally to verify that the data has been read correctly from
> each sector. If the majority of errors come from disk failures, then a
> corrupt sector would have to pass the drive CRC *and* the PostgreSQL CRC
> in order for an XLog entry to be considered valid. I would have thought
> the chances of that happening are reasonably small, so even with CRC32
> such corruption can be detected fairly reliably.

It's not really a matter of backstopping the hardware's error detection;
if we were trying to do that, we'd keep a CRC for every data page, which
we don't.  The real reason for the WAL CRCs is to provide a reliable method
of identifying the end of WAL: when the "next record" fails its checksum,
you know it's bogus.  This is a nontrivial point because of the way that we
re-use WAL files --- the pages beyond the last successfully written page
aren't going to be zeroes, they'll be filled with stale WAL data left over
from the file's previous use.
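
To make that concrete, here is a minimal sketch in C of the replay-time
check.  This is NOT PostgreSQL's actual xlog code: the ToyWalRecord layout,
the field names, and the bitwise CRC-32 routine are simplified assumptions
made purely for illustration.

/*
 * Sketch of CRC-based end-of-WAL detection.  Hypothetical record layout
 * and CRC routine; the real code in xlog.c is considerably more involved.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified WAL record header. */
typedef struct ToyWalRecord
{
    uint32_t    xl_len;         /* length of the payload that follows */
    uint32_t    xl_crc;         /* CRC computed over the payload */
} ToyWalRecord;

/* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
static uint32_t
crc32_buf(const uint8_t *buf, size_t len)
{
    uint32_t    crc = 0xFFFFFFFF;

    for (size_t i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1));
    }
    return crc ^ 0xFFFFFFFF;
}

/*
 * Replay-time test: a record whose stored CRC doesn't match the
 * recomputed one is taken to be stale data from the segment's previous
 * life, i.e. we've found the end of valid WAL.
 */
static int
record_is_valid(const ToyWalRecord *rec, const uint8_t *payload)
{
    return crc32_buf(payload, rec->xl_len) == rec->xl_crc;
}

int
main(void)
{
    const uint8_t payload[] = "insert tuple into rel 16384";
    ToyWalRecord rec;

    rec.xl_len = sizeof(payload);
    rec.xl_crc = crc32_buf(payload, rec.xl_len);
    printf("fresh record valid?  %d\n", record_is_valid(&rec, payload));

    /* Simulate hitting leftover bytes in a recycled WAL segment. */
    rec.xl_crc ^= 0xDEADBEEF;
    printf("stale record valid?  %d  -> end of WAL\n",
           record_is_valid(&rec, payload));
    return 0;
}

Note that with 32 bits of CRC, a bogus "record" of leftover bytes slips
past the check with probability on the order of 2^-32, which is the sense
in which CRC32 is arguably already plenty for finding end-of-WAL.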

Personally I think CRC32 is plenty for this job, but there were those
arguing loudly for CRC64 back when we made the decision originally ...

                        regards, tom lane
