On Wed, Mar 29, 2017 at 2:23 AM, Masahiko Sawada <sawada.m...@gmail.com> wrote:
> I was thinking that the status of this patch is still "Needs review"
> because I sent the latest version of the patch[1].

I think you're right.

I took a look at this today.  I think there is some problem with the
design of this patch.  I originally proposed a threshold based on the
percentage of not-all-visible pages on the theory that we'd just skip
looking at the indexes altogether in that case.  But that's not what
the patch does: it only avoids the index *cleanup*, not the index
*vacuum*.  And the comments in btvacuumcleanup say this:

    /*
     * If btbulkdelete was called, we need not do anything, just return the
     * stats from the latest btbulkdelete call.  If it wasn't called, we must
     * still do a pass over the index, to recycle any newly-recyclable pages
     * and to obtain index statistics.
     *
     * Since we aren't going to actually delete any leaf items, there's no
     * need to go through all the vacuum-cycle-ID pushups.
     */
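
If I'm remembering the surrounding code in nbtree.c correctly, the
control flow around that comment is roughly as follows (paraphrased
from memory, not verbatim; the trailing statistics fixups are elided):

    IndexBulkDeleteResult *
    btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
    {
        /* No-op in ANALYZE ONLY mode */
        if (info->analyze_only)
            return stats;

        /* the comment quoted above goes here */
        if (stats == NULL)
        {
            /* btbulkdelete never ran, so scan the whole index now */
            stats = (IndexBulkDeleteResult *)
                palloc0(sizeof(IndexBulkDeleteResult));
            btvacuumscan(info, stats, NULL, NULL, 0);
        }

        /* vacuum the index's FSM, fix up the stats, and return */
        IndexFreeSpaceMapVacuum(info->index);

        return stats;
    }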

So, if I'm reading this correctly, the only time this patch saves
substantial work - at least in the case of a btree index - is in the
case where there are no dead tuples at all.  But if we only want to
avoid the work in that case, then a threshold based on the percentage
of all-visible pages seems like the wrong thing, because the other
stuff btvacuumcleanup() is doing doesn't have anything to do with the
number of all-visible pages.
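
To spell out where those calls come from on the heap side (again
paraphrasing vacuumlazy.c from memory, not verbatim):

    /*
     * In lazy_scan_heap(): run whenever the dead-tuple array fills up,
     * and once more after the scan if any dead tuples were found.
     */
    for (i = 0; i < nindexes; i++)
        lazy_vacuum_index(Irel[i], &indstats[i], vacrelstats);

    /* ...and then unconditionally, once the heap scan is finished: */
    for (i = 0; i < nindexes; i++)
        lazy_cleanup_index(Irel[i], indstats[i], vacrelstats);

The patch's threshold only skips the second loop, and once the first
loop has run at least once, btvacuumcleanup is already close to free.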

I'm not quite sure what the right thing to do is here, but I'm
doubtful that this is it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

