Hello,
I have a massive table in my database, about 1.5 TB and around 3.5 billion
rows (it is one partition of a larger table). When I try to back up just this
one big table, after a while I get the following error:





pg_dump: Dumping the contents of table "ta_low" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  compressed data is corrupted
pg_dump: The command was: COPY public.ta_low (foreign_id, name, numbers) TO stdout;





where foreign_id is a bigint, name is varchar(10) with extended storage, and
numbers is a fairly long array of ints, also with extended storage, of course.
The table otherwise works normally, so I assume only a small number of rows
are corrupted.




I tried using the approach described here:
https://no0p.github.io/postgresql/2013/04/02/postgres-corruption-resolution.html
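
The checking function I created looks roughly like this (a minimal sketch
along the lines of that post; the intent is to force decompression of the
whole row and trap any error it raises):

CREATE OR REPLACE FUNCTION chk(anyelement)
RETURNS boolean
LANGUAGE plpgsql
AS $f$
BEGIN
    -- Casting the row to text forces detoasting/decompression of every
    -- column, so a corrupted value should raise an error here.
    PERFORM $1::text;
    RETURN false;
EXCEPTION WHEN OTHERS THEN
    -- Any failure is trapped and the row is reported as corrupted.
    RETURN true;
END;
$f$;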

However, after creating the checking function and running the query
select ctid from ta_low where chk(ta_low); I get another error, despite the
exception handling in the function:





ERROR:  compressed data is corrupted
SQL state: XX000
CONTEXT:  PL/pgSQL function chk(anyelement) while storing call arguments into
local variables
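
Apparently the whole row is decompressed while it is being passed into the
function, before the exception handler can take over. What I was thinking of
trying next is checking rows one at a time by ctid, so that the decompression
happens inside the exception block, along these lines (a rough, untested
sketch using the three columns from the COPY above; I don't know yet whether
it runs into the same problem):

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT ctid FROM public.ta_low LOOP
        BEGIN
            -- Force decompression of the stored values one row at a time,
            -- inside the exception block.
            PERFORM foreign_id, name::text, numbers::text
              FROM public.ta_low
             WHERE ctid = r.ctid;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'corrupted row at ctid %', r.ctid;
        END;
    END LOOP;
END $$;

In practice I would probably restrict this to ranges of ctid so it can be run
in batches rather than in one pass over 3.5 billion rows.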





I want to back up this table and possibly replace the corrupted rows, although
that is not the top priority; if only a single-digit number of rows out of the
billions are unreadable, it is not a big issue. I'm using PostgreSQL 9.5. Any
advice on how to find/replace/delete the corrupted rows, or on how to back up
the table, is appreciated. Thank you.
