Hi,

Apologies, I forgot to respond to the second part of your message.
On 09/06/2017 09:45 AM, Haribabu Kommi wrote:
>
> While testing this patch, I found another problem that is not related to
> this patch. When the vacuum command is executed multiple times on
> a table with no dead rows, the reltuples value slowly decreases.
>
> postgres=# select reltuples, n_live_tup, n_dead_tup
>               from pg_stat_user_tables join pg_class using (relname)
>              where relname = 't';
>  reltuples | n_live_tup | n_dead_tup
> -----------+------------+------------
>     899674 |     899674 |          0
> (1 row)
>
> postgres=# vacuum t;
> VACUUM
> postgres=# select reltuples, n_live_tup, n_dead_tup
>               from pg_stat_user_tables join pg_class using (relname)
>              where relname = 't';
>  reltuples | n_live_tup | n_dead_tup
> -----------+------------+------------
>     899622 |     899622 |          0
> (1 row)
>
> postgres=# vacuum t;
> VACUUM
> postgres=# select reltuples, n_live_tup, n_dead_tup
>               from pg_stat_user_tables join pg_class using (relname)
>              where relname = 't';
>  reltuples | n_live_tup | n_dead_tup
> -----------+------------+------------
>     899570 |     899570 |          0
> (1 row)
>
> In the lazy_scan_heap() function, we force a scan of the last page of the
> relation, to avoid taking the access exclusive lock in lazy_truncate_heap()
> when there are tuples on the last page. Because of this, the
> scanned_pages value is never 0, so vac_estimate_reltuples()
> estimates the total based on the number of tuples on
> the last page of the relation. This estimate gradually reduces
> the reltuples value.
>

Hmmm, that's annoying. Perhaps we should not update the values in this
case, then? I mean, if we only scan the last page, how reliable are the
derived values?

For the record - AFAICS this issue is unrelated to the patch (i.e. it's
not introduced by it).

> I am thinking whether this problem really happens in real-world
> scenarios, to justify producing a fix?
>

Not sure.
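To illustrate the drift, here is a back-of-the-envelope model (not the actual vac_estimate_reltuples() code) of blending the old tuple density with a single scanned page. The page count and last-page fill are made-up numbers chosen to roughly match the example above; the point is only that a partially full last page pulls the estimate down on every run:

```python
def estimate_reltuples(total_pages, old_reltuples,
                       scanned_pages, scanned_tuples):
    """Rough model: keep the old density for pages we did not look at,
    and use the observed tuple count for the pages we did scan."""
    old_density = old_reltuples / total_pages
    unscanned_pages = total_pages - scanned_pages
    return scanned_tuples + old_density * unscanned_pages

# Assumed numbers: a 4425-page table (~203 tuples/page on average)
# whose last page holds only 150 tuples.
reltuples = 899674.0
TOTAL_PAGES = 4425
LAST_PAGE_TUPLES = 150

# Each VACUUM scans just the last page, so the estimate drifts down
# by roughly (average density - last-page tuples) per run.
for _ in range(3):
    reltuples = estimate_reltuples(TOTAL_PAGES, reltuples,
                                   1, LAST_PAGE_TUPLES)
    print(round(reltuples))
```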
As each vacuum run decrements the count only a little bit, you'd have to
run vacuum many times to actually be bitten by it. For people relying on
autovacuum that won't happen, as it only runs on tables with a certain
number of dead tuples. So you'd have to be running VACUUM in a loop from
a script, or something like that (but not VACUUM ANALYZE, because that
works fine).

That being said, fixing a bug is always a good thing, I guess.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services