The server did crash yesterday. It hosts many schemas that contain the same 
table (different shards). Operations on these tables are very similar, yet only 
a few of them actually show this problem. Other than this error, we do not see 
any other abnormalities on the box.
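One way to spot which shard copies are affected is to compare the stats
collector's n_live_tup with the planner's reltuples estimate across all
schemas. A minimal sketch; the table name and the 50% threshold are only
illustrative:

  -- Flag copies of the sharded table whose n_live_tup is far below the
  -- planner's reltuples estimate (table name and threshold are placeholders).
  SELECT s.schemaname,
         s.relname,
         s.n_live_tup,
         c.reltuples::bigint AS reltuples
  FROM pg_stat_user_tables s
  JOIN pg_class c ON c.oid = s.relid
  WHERE s.relname = 'my_sharded_table'
    AND c.reltuples > 0
    AND s.n_live_tup < c.reltuples * 0.5
  ORDER BY s.schemaname;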

Lisa

From: Tom Lane <t...@sss.pgh.pa.us>
Date: Friday, January 16, 2015 at 4:23 PM
To: Lisa Guo <l...@fb.com>
Cc: "pgsql-hackers@postgresql.org" <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] n_live_tup smaller than the number of rows in a table

Lisa Guo <l...@fb.com> writes:
> We are seeing a strange behavior where n_live_tup is way smaller than the
> number of rows in a table. The table has > 18M rows, but n_live_tup is only
> < 100K. We tried VACUUM ANALYZE to clear up any sticky errors, but it didn’t
> correct the problem. We are running Postgres 9.2. Any pointers on how we
> could debug this problem and how to correct the stats?

n_live_tup is a moving average over the last few observations, so in
theory it should get better if you repeat ANALYZE several times.
AFAIR, VACUUM isn't likely to help much.  (VACUUM FREEZE might though.)
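To watch that convergence, something like the following can be re-run a few
times while comparing n_live_tup against the real row count (schema and table
names here are placeholders):

  -- Re-run ANALYZE and compare n_live_tup with the actual count.
  -- The stats collector reports asynchronously, so the view may lag briefly.
  ANALYZE myschema.my_sharded_table;

  SELECT n_live_tup,
         (SELECT count(*) FROM myschema.my_sharded_table) AS actual_rows
  FROM pg_stat_user_tables
  WHERE schemaname = 'myschema'
    AND relname = 'my_sharded_table';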

It seems odd that you have a value that's so far off ... have you been
using this table in any unusual way?

regards, tom lane
