Network Administrator <[EMAIL PROTECTED]> writes:
>>> PANIC:  read of clog file 5, offset 16384 failed: Success

>> Hm, not good :-(.  What files actually exist in $PGDATA/pg_clog/ (names
>> and sizes)?

> -rw-------    1 postgres users      262144 Jul 20 15:53 0000
> -rw-------    1 postgres users      262144 Jul 31 12:57 0001
> -rw-------    1 postgres users      262144 Aug 12 17:32 0002
> -rw-------    1 postgres users      262144 Aug 26 00:15 0003
> -rw-------    1 postgres users      262144 Sep  9 23:44 0004
> -rw-------    1 postgres users       16384 Sep 10 21:21 0005

Okay, it's trying to read off the end of the clog, no doubt looking for
a transaction number just slightly larger than what's known to clog.
This probably indicates more serious problems (because WAL replay really
should have prevented such an inconsistency), but you can get past the
immediate panic relatively easily: just append an 8k page of zeroes to
clog.  Assuming your system has /dev/zero, something like this should
do it:

        dd bs=8k count=1 < /dev/zero >> $PGDATA/pg_clog/0005

(do this with care, of course, and you should probably shut down the
postmaster first).  You might have to add more than one page if you
then get similar PANICs with larger offsets, but try one page for
starters.
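
Putting it together, a minimal sketch of the whole sequence might look
like this (the pg_ctl invocation and the backup location are my
assumptions; adjust for your installation):

        pg_ctl -D $PGDATA stop                  # stop the postmaster first
        cp $PGDATA/pg_clog/0005 /tmp/0005.bak   # keep a copy of the segment, just in case
        dd bs=8k count=1 < /dev/zero >> $PGDATA/pg_clog/0005
        pg_ctl -D $PGDATA start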

If this does suppress the failure messages, you are still not really out
of the woods; you should do what you can to check for data consistency.
A paranoid person would probably take a complete pg_dump and try to diff
it against the last known good dump.  At the very least, I'd treat the
table involved in the problem with great suspicion.
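
If you do want to attempt that comparison, something along these lines
would be a starting point (the database name and the path of the old
dump are placeholders):

        pg_dump mydb > /tmp/current.sql         # plain-text dump of the suspect database
        diff /tmp/last_known_good.sql /tmp/current.sql

Bear in mind that row ordering in a dump isn't guaranteed to be stable,
so a raw diff may show some spurious differences.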

                        regards, tom lane
