On Fri, Mar 29, 2024 at 12:00 PM Heikki Linnakangas <hlinn...@iki.fi> wrote:
>
> On 29/03/2024 07:04, Melanie Plageman wrote:
> > On Thu, Mar 28, 2024 at 11:07:10AM -0400, Melanie Plageman wrote:
> >> These comments could use another pass. I had added some extra
> >> (probably redundant) content when I thought I was refactoring it a
> >> certain way and then changed my mind.
> >>
> >> Attached is a diff with some ideas I had for a bit of code simplification.
> >>
> >> Are you working on cleaning this patch up or should I pick it up?
> >
> > Attached v9 is rebased over master. But, more importantly, I took
> > another pass at heap_prune_chain() and am pretty happy with what I came
> > up with. See 0021. I simplified the traversal logic and then grouped the
> > chain processing into three branches at the end. I find it much easier
> > to understand what we are doing for different types of HOT chains.
>
> Ah yes, agreed, that's nicer.
>
> The 'survivor' variable is a little confusing, especially here:
>
>         if (!survivor)
>         {
>                 int                     i;
>
>                 /*
>                  * If no DEAD tuple was found, and the root is redirected,
>                  * mark it as such.
>                  */
>                 if ((i = ItemIdIsRedirected(rootlp)))
>                         heap_prune_record_unchanged_lp_redirect(prstate, rootoffnum);
>
>                 /* the rest of tuples in the chain are normal, unchanged tuples */
>                 for (; i < nchain; i++)
>                         heap_prune_record_unchanged(dp, prstate, chainitems[i]);
>         }
>
> You would think that "if (!survivor)" means there are no live
> tuples on the chain, i.e. no survivors. But in fact it's the opposite;
> it means they are all live. Maybe call it 'ndeadchain' instead,
> meaning the number of dead items in the chain.

Makes sense.

> > I've done that in my version. While testing this, I found that only
> > on-access pruning needed this final loop calling record_unchanged() on
> > items not yet marked. I know we can't skip this final loop entirely in
> > the ON ACCESS case because it calls record_prunable(), but we could
> > consider moving that back out into heap_prune_chain()? Or what do you
> > think?
>
> Hmm, why is that different with on-access pruning?

Well, it is quite possible we just don't hit cases like this with
vacuum in our test suite (not that they are unreachable by vacuum).

It's just very easy to get in this situation with on-access pruning.
Imagine an UPDATE which caused the following chain:

RECENTLY_DEAD -> DELETE_IN_PROGRESS -> INSERT_IN_PROGRESS

It invokes heap_page_prune_and_freeze() (assume the page meets the
criteria for on-access pruning) and eventually enters
heap_prune_chain() with the first offset in this chain.

The first item is LP_NORMAL and the tuple is RECENTLY_DEAD, so the
survivor variable stays 0 and we record_unchanged() for that tuple and
return. The next two items are LP_NORMAL and the tuples are HOT
tuples, so we just return from the "fast path" at the top of
heap_prune_chain(). After invoking heap_prune_chain() for all of them,
the first offset is marked but the other two are not. Thus, we end up
having to record_unchanged() them later. This kind of case comes up all
the time in queries in the regression test suite.

> Here's another idea: In the first loop through the offsets, where we
> gather the HTSV status of each item, also collect the offsets of all HOT
> and non-HOT items to two separate arrays. Call heap_prune_chain() for
> all the non-HOT items first, and then process any remaining HOT tuples
> that haven't been marked yet.

That's an interesting idea. I'll try it out and see how it works.

> > I haven't finished updating all the comments, but I am really interested
> > to know what you think about heap_prune_chain() now.
>
> Looks much better now, thanks!

I am currently doing chain traversal refactoring in heap_prune_chain()
on top of master as the first patch in the set.

- Melanie

