Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Or, we could store only the delta between current record and the
> previous one. Assuming we know what the current record is, that wouldn't
> lose any precision. That way xl_prev only needs to be as big as the
> biggest possible WAL record we can have.
The trouble with either approach is that it discards forensic intelligence in the name of bit squeezing.  The high bits of xl_prev are the only direct evidence *within* a WAL record of where it thinks it is in the WAL sequence, and the back-comparison against where we thought the previous record was is correspondingly the only really strong protection against a torn-page problem within a WAL page, should the sector boundary happen to fall exactly at a WAL record boundary.

I fear that a delta would be completely unacceptable for that check, because it's entirely possible that a lot of different WAL records would be the same size (consider bulk load into a fixed-width table for an example).  Simon's scheme merely removes some of the protection, not all of it ;-).  But I don't really like removing any of it.  If we need more bits in WAL headers, then so be it --- they'd likely still be smaller than they were a couple releases ago.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers