gokulnathbabu manoharan <[EMAIL PROTECTED]> writes:
> In my sample databases the relfilenode for pg_class
> was 1259, so I checked block number 190805 of that
> relation.  Since the block size is 8K, relation 1259 is
> split across two files, 1259 and 1259.1.  Block 190805
> falls in the second file, at block number 58733
> ((190805 - (1G/8K)) = 58733).

You've got a pg_class catalog exceeding a gigabyte??
Apparently you've been exceedingly lax about vacuuming.
You need to do something about that, because it's surely
hurting performance.
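
For what it's worth, one rough way to confirm how large the catalog
has grown on disk is to total up its segment files.  This is only a
sketch in Python; the data directory path and database OID below are
placeholders you'd replace with your own:

    # Sketch: sum the on-disk size of pg_class (relfilenode 1259),
    # which is split into 1GB segments named 1259, 1259.1, 1259.2, ...
    # The data directory and database OID here are placeholders.
    import glob
    import os

    datadir = "/usr/local/pgsql/data"   # adjust for your installation
    dboid = "17229"                     # adjust for your database

    base = os.path.join(datadir, "base", dboid)
    segs = [os.path.join(base, "1259")] + sorted(glob.glob(os.path.join(base, "1259.*")))
    total = sum(os.path.getsize(f) for f in segs if os.path.exists(f))
    print("pg_class: %.1f MB in %d segment file(s)" % (total / 1048576.0, len(segs)))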

You did the math wrong --- the damaged block would be 59733, not
58733, which is why pg_filedump isn't noticing anything wrong here.
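
Spelled out, with the default 8K block size and 1GB segment size
(just the arithmetic, as a small Python sketch):

    # 1GB segment / 8K blocks = 131072 blocks per segment file
    blocks_per_segment = (1024 * 1024 * 1024) // 8192    # 131072
    damaged = 190805
    print(damaged // blocks_per_segment)   # 1     -> segment file 1259.1
    print(damaged % blocks_per_segment)    # 59733 -> block within that file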

It seems almost certain that there are only dead rows in the
damaged block, so it'd be sufficient to zero out the block,
either manually with dd or by turning on zero_damaged_pages.
After that I'd recommend a dump, initdb, reload, since there may
be other damage you don't know about.
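
If you go the manual route, something along these lines would zero just
that one block.  This is only a sketch: the file path is a placeholder,
the postmaster must be stopped first, and you should save a copy of the
segment file before writing to it.

    # Sketch: zero block 59733 of pg_class's second segment file.
    # Stop the server and copy the file aside before doing this.
    # Roughly equivalent dd command:
    #   dd if=/dev/zero of=1259.1 bs=8192 seek=59733 count=1 conv=notrunc
    BLOCK_SIZE = 8192
    path = "/usr/local/pgsql/data/base/17229/1259.1"   # placeholder path

    f = open(path, "r+b")
    f.seek(59733 * BLOCK_SIZE)
    f.write(b"\x00" * BLOCK_SIZE)
    f.close()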

                        regards, tom lane

