On 14 Jan, 09:08, [EMAIL PROTECTED] (Ashish Karalkar) wrote:
> Hello list members,
> I have a table with 140M rows. While I am trying to select the count from the
> table, I am getting the following error:
> ERROR:  shared buffer hash table corrupted
> Can anybody please suggest what has gone wrong and how to fix it?
> PostgreSQL 8.2.4
> OS:Suse 10.3
> With Regards
> Ashish

I had many problems with transaction log corruption and table
corruption on a Linux 2.6 kernel server with bad memory banks.
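
Before rebuilding anything, it's worth confirming the RAM is actually
bad. A minimal userland check, assuming the 'memtester' utility is
available (a bootable memtest86+ run is more thorough, since it can
test nearly all of physical memory):

    # Lock and exercise 1 GB of RAM for 5 passes; any reported
    # failure points at a bad module or a mismatched pairing.
    $ memtester 1024M 5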

It did not show the same error message about shared buffers, but I
was able to fix it by replacing the memory banks with ones of the
same vendor, speed, and latency. After that, I performed the
following steps, in exactly this order (a consolidated shell sketch
follows the list):

1- Dropped every database object defined by DDL (views, indexes,
functions, etc.). Of course, you'll need the scripts to recreate
them later;
2- Executed REINDEX DATABASE xxxx on each database in the cluster;
3- Executed '$ vacuumdb -vfz' against the databases;
4- Ran pg_dumpall to dump all databases into a backup script file
(steps 2 and 3 are only for validation);
5- Removed the data directory of the postgres cluster (PGDATA);
6- Created a new postgres cluster and restored the pg_dumpall script
into it;
7- Re-ran the schema definition scripts to recreate the database
objects.
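
For reference, here is the whole sequence in shell form. This is only
a sketch: 'mydb', the backup path, and 'schema_objects.sql' are
placeholder names, and $PGDATA must point at your cluster's data
directory.

    # 2/3 - validation passes (run per database)
    $ psql -d mydb -c 'REINDEX DATABASE mydb;'
    $ vacuumdb -vfz mydb

    # 4 - dump the whole cluster while it still starts
    $ pg_dumpall > /backup/cluster_dump.sql

    # 5/6 - stop the server, wipe the old cluster, build a fresh one
    $ pg_ctl -D $PGDATA stop
    $ rm -rf $PGDATA
    $ initdb -D $PGDATA
    $ pg_ctl -D $PGDATA start
    $ psql -f /backup/cluster_dump.sql postgres

    # 7 - recreate the objects dropped in step 1
    $ psql -d mydb -f schema_objects.sql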

As you can see, I was extremely lucky that the corruption stayed in
the indexes and other derived objects. If the table data itself had
been corrupted, the story could have been different, and you would
have seen errors at steps 2 and 3.
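
If you want to check whether the heap data itself is readable before
trusting a dump, forcing a full sequential read of every row will
surface bad pages immediately. A rough sketch (the table and database
names are illustrative):

    # A corrupted heap page raises an ERROR as soon as the scan hits it.
    $ psql -d mydb -c 'SELECT count(*) FROM big_table;'
    # Or read every row of every table in one pass, discarding the output:
    $ pg_dump mydb > /dev/null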
