Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-11 Thread Melanie Plageman
v3 attached

On Thu, Jan 4, 2024 at 3:23 PM Andres Freund  wrote:
>
> Hi,
>
> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
> > Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var 
> > rel_pages
> > Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
> >  nskippable_blocks
>
> I think these may lead to worse code - the compiler has to reload
> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it
> can't guarantee that they're not changed within one of the external functions
> called in the loop body.

I buy that for 0001 but 0002 is still using local variables.
nskippable_blocks was just another variable to keep track of even
though we could already get that info from local variables
next_unskippable_block and next_block.
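
(For the record, the codegen concern is the usual one: a field read
through a pointer must be reloaded after every call the compiler cannot
see into, while a local copy can live in a register. A standalone toy of
the pattern -- not the vacuum code; in vacuumlazy.c the callee really is
in another translation unit, which is stubbed here so the example runs:)

#include <stdio.h>

typedef struct
{
	unsigned	rel_pages;
} Rel;

/* Stub for a function that, in vacuumlazy.c, lives in another file */
static void
external_work(unsigned blkno)
{
	printf("block %u\n", blkno);
}

static void
scan_field(Rel *rel)
{
	/*
	 * When the callee is opaque, rel->rel_pages must be re-read from
	 * memory on every iteration: the callee might have changed it.
	 */
	for (unsigned b = 0; b < rel->rel_pages; b++)
		external_work(b);
}

static void
scan_local(Rel *rel)
{
	/* A local copy lets the loop bound stay in a register. */
	unsigned	rel_pages = rel->rel_pages;

	for (unsigned b = 0; b < rel_pages; b++)
		external_work(b);
}

int
main(void)
{
	Rel			rel = {3};

	scan_field(&rel);
	scan_local(&rel);
	return 0;
}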

In light of this comment, I've refactored 0003/0004 (0002 and 0003 in
this version [v3]) to use local variables in the loop as well. I had
previously switched to using the members of the VacSkipState struct I
introduced.

> > Subject: [PATCH v2 3/6] Add lazy_scan_skip unskippable state
> >
> > Future commits will remove all skipping logic from lazy_scan_heap() and
> > confine it to lazy_scan_skip(). To make those commits more clear, first
> > introduce the struct, VacSkipState, which will maintain the variables
> > needed to skip ranges less than SKIP_PAGES_THRESHOLD.
>
> Why not add this to LVRelState, possibly as a struct embedded within it?

Done in attached.

> > From 335faad5948b2bec3b83c2db809bb9161d373dcb Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman 
> > Date: Sat, 30 Dec 2023 16:59:27 -0500
> > Subject: [PATCH v2 4/6] Confine vacuum skip logic to lazy_scan_skip
>
> > By always calling lazy_scan_skip() -- instead of only when we have
> > reached the next unskippable block -- we no longer need the
> > skipping_current_range variable. lazy_scan_heap() no longer needs to
> > manage the skipped range -- checking if we reached the end in order to
> > then call lazy_scan_skip(). And lazy_scan_skip() can derive the
> > visibility status of a block from whether or not we are in a skippable
> > range -- that is, whether or not the next_block is equal to the next
> > unskippable block.
>
> I wonder if it should be renamed as part of this - the name is somewhat
> confusing now (and perhaps before)? lazy_scan_get_next_block() or such?

Why stop there! I've removed lazy and called it
heap_vac_scan_get_next_block() -- a little long, but...

> > +	while (true)
> >  	{
> >  		Buffer		buf;
> >  		Page		page;
> > -		bool		all_visible_according_to_vm;
> >  		LVPagePruneState prunestate;
> >
> > -		if (blkno == vacskip.next_unskippable_block)
> > -		{
> > -			/*
> > -			 * Can't skip this page safely.  Must scan the page.  But
> > -			 * determine the next skippable range after the page first.
> > -			 */
> > -			all_visible_according_to_vm = vacskip.next_unskippable_allvis;
> > -			lazy_scan_skip(vacrel, &vacskip, blkno + 1);
> > -
> > -			Assert(vacskip.next_unskippable_block >= blkno + 1);
> > -		}
> > -		else
> > -		{
> > -			/* Last page always scanned (may need to set nonempty_pages) */
> > -			Assert(blkno < rel_pages - 1);
> > -
> > -			if (vacskip.skipping_current_range)
> > -				continue;
> > +		blkno = lazy_scan_skip(vacrel, &vacskip, blkno + 1,
> > +							   &all_visible_according_to_vm);
> >
> > -			/* Current range is too small to skip -- just scan the page */
> > -			all_visible_according_to_vm = true;
> > -		}
> > +		if (blkno == InvalidBlockNumber)
> > +			break;
> >
> >  		vacrel->scanned_pages++;
> >
>
> I don't like that we still do determination about the next block outside of
> lazy_scan_skip() and have duplicated exit conditions between lazy_scan_skip()
> and lazy_scan_heap().
>
> I'd probably change the interface to something like
>
> while (lazy_scan_get_next_block(vacrel, &blkno))
> {
> ...
> }

I've done this. I do now find the parameter names a bit confusing.
There is next_block (which is the "next block in line" and is an input
parameter) and blkno, which is an output parameter with the next block
that should actually be processed. Maybe it's okay?
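
To make the shape concrete, here is a compilable toy model of that
interface (the signature mirrors the v4 patch below, minus the
all_visible_according_to_vm out-parameter; the body is simplified
stand-in logic with no SKIP_PAGES_THRESHOLD and no last-block rule, not
the real implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Toy stand-in for LVRelState: relation size plus a fake visibility map */
typedef struct
{
	BlockNumber rel_pages;
	const bool *allvis;
} ToyRelState;

/*
 * next_block is the "next block in line" (input); on success, *blkno is
 * the next block that should actually be processed (output).
 */
static bool
get_next_block(ToyRelState *vacrel, BlockNumber next_block,
			   BlockNumber *blkno)
{
	while (next_block < vacrel->rel_pages && vacrel->allvis[next_block])
		next_block++;			/* skip the skippable range */
	if (next_block >= vacrel->rel_pages)
		return false;			/* scan is done */
	*blkno = next_block;
	return true;
}

int
main(void)
{
	bool		allvis[] = {false, true, true, false, true, false};
	ToyRelState vacrel = {6, allvis};
	BlockNumber blkno = InvalidBlockNumber;	/* blkno + 1 wraps to block 0 */

	while (get_next_block(&vacrel, blkno + 1, &blkno))
		printf("processing block %u\n", (unsigned) blkno);
	return 0;
}

This prints blocks 0, 3, and 5: the caller never sees the skipped
ranges, which is the point of the refactor.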

> > From b6603e35147c4bbe3337280222e6243524b0110e Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman 
> > Date: Sun, 31 Dec 2023 09:47:18 -0500
> > Subject: [PATCH v2 5/6] VacSkipState saves reference to LVRelState
> >
> > The streaming read interface can only give pgsr_next callbacks access to
> > two pieces of private data. As such, move a reference to the
> > LVRelState into the VacSkipState.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-11 Thread Melanie Plageman
On Fri, Jan 5, 2024 at 5:51 AM Nazir Bilal Yavuz  wrote:
>
> On Fri, 5 Jan 2024 at 02:25, Jim Nasby  wrote:
> >
> > On 1/4/24 2:23 PM, Andres Freund wrote:
> > > On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
> > > > Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var
> > > >  rel_pages
> > > > Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
> > > >  nskippable_blocks
> > >
> > > I think these may lead to worse code - the compiler has to reload
> > > vacrel->rel_pages/next_unskippable_block for every loop iteration,
> > > because it can't guarantee that they're not changed within one of the
> > > external functions called in the loop body.
> >
> > Admittedly I'm not up to speed on recent vacuum changes, but I have to
> > wonder if the concept of skipping should go away in the context of
> > vector IO? Instead of thinking about "we can skip this range of
> > blocks", why not maintain a list of "here's the next X number of
> > blocks that we need to vacuum"?
>
> Sorry if I misunderstood. AFAIU, with the help of the vectored IO;
> "the next X number of blocks that need to be vacuumed" will be
> prefetched by calculating the unskippable blocks ( using the
> lazy_scan_skip() function ) and the X will be determined by Postgres
> itself. Do you have something different in your mind?

I think you are both right. As we gain more control of readahead from
within Postgres, we will likely want to revisit this heuristic as it
may not serve us anymore. But the streaming read interface/vectored
I/O is also not a drop-in replacement for it. To change anything and
ensure there is no regression, we will probably have to do
cross-platform benchmarking, though.

That being said, I would absolutely love to get rid of the skippable
ranges because I find them very error-prone and confusing. Hopefully
now that the skipping logic is isolated to a single function, it will
be easier not to trip over it when working on lazy_scan_heap().

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-12 Thread Jim Nasby

On 1/11/24 5:50 PM, Melanie Plageman wrote:

> On Fri, Jan 5, 2024 at 5:51 AM Nazir Bilal Yavuz wrote:
> >
> > On Fri, 5 Jan 2024 at 02:25, Jim Nasby wrote:
> > >
> > > On 1/4/24 2:23 PM, Andres Freund wrote:
> > > > On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
> > > > > Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var
> > > > >  rel_pages
> > > > > Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
> > > > >  nskippable_blocks
> > > >
> > > > I think these may lead to worse code - the compiler has to reload
> > > > vacrel->rel_pages/next_unskippable_block for every loop iteration,
> > > > because it can't guarantee that they're not changed within one of the
> > > > external functions called in the loop body.
> > >
> > > Admittedly I'm not up to speed on recent vacuum changes, but I have to
> > > wonder if the concept of skipping should go away in the context of
> > > vector IO? Instead of thinking about "we can skip this range of
> > > blocks", why not maintain a list of "here's the next X number of
> > > blocks that we need to vacuum"?
> >
> > Sorry if I misunderstood. AFAIU, with the help of the vectored IO;
> > "the next X number of blocks that need to be vacuumed" will be
> > prefetched by calculating the unskippable blocks ( using the
> > lazy_scan_skip() function ) and the X will be determined by Postgres
> > itself. Do you have something different in your mind?
>
> I think you are both right. As we gain more control of readahead from
> within Postgres, we will likely want to revisit this heuristic as it
> may not serve us anymore. But the streaming read interface/vectored
> I/O is also not a drop-in replacement for it. To change anything and
> ensure there is no regression, we will probably have to do
> cross-platform benchmarking, though.
>
> That being said, I would absolutely love to get rid of the skippable
> ranges because I find them very error-prone and confusing. Hopefully
> now that the skipping logic is isolated to a single function, it will
> be easier not to trip over it when working on lazy_scan_heap().


Yeah, arguably it's just a matter of semantics, but IMO it's a lot 
clearer to simply think in terms of "here's the next blocks we know we 
want to vacuum" instead of "we vacuum everything, but sometimes we skip 
some blocks".

--
Jim Nasby, Data Architect, Austin TX





Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-12 Thread Melanie Plageman
On Fri, Jan 12, 2024 at 2:02 PM Jim Nasby  wrote:
>
> On 1/11/24 5:50 PM, Melanie Plageman wrote:
> > On Fri, Jan 5, 2024 at 5:51 AM Nazir Bilal Yavuz  wrote:
> >>
> >> On Fri, 5 Jan 2024 at 02:25, Jim Nasby  wrote:
> >>>
> >>> On 1/4/24 2:23 PM, Andres Freund wrote:
> >>>
> >>> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
> >>>
> >>> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var 
> >>> rel_pages
> >>> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
> >>>   nskippable_blocks
> >>>
> >>> I think these may lead to worse code - the compiler has to reload
> >>> vacrel->rel_pages/next_unskippable_block for every loop iteration, 
> >>> because it
> >>> can't guarantee that they're not changed within one of the external 
> >>> functions
> >>> called in the loop body.
> >>>
> >>> Admittedly I'm not up to speed on recent vacuum changes, but I have to 
> >>> wonder if the concept of skipping should go away in the context of vector 
> >>> IO? Instead of thinking about "we can skip this range of blocks", why not 
> >>> maintain a list of "here's the next X number of blocks that we need to 
> >>> vacuum"?
> >>
> >> Sorry if I misunderstood. AFAIU, with the help of the vectored IO;
> >> "the next X number of blocks that need to be vacuumed" will be
> >> prefetched by calculating the unskippable blocks ( using the
> >> lazy_scan_skip() function ) and the X will be determined by Postgres
> >> itself. Do you have something different in your mind?
> >
> > I think you are both right. As we gain more control of readahead from
> > within Postgres, we will likely want to revisit this heuristic as it
> > may not serve us anymore. But the streaming read interface/vectored
> > I/O is also not a drop-in replacement for it. To change anything and
> > ensure there is no regression, we will probably have to do
> > cross-platform benchmarking, though.
> >
> > That being said, I would absolutely love to get rid of the skippable
> > ranges because I find them very error-prone and confusing. Hopefully
> > now that the skipping logic is isolated to a single function, it will
> > be easier not to trip over it when working on lazy_scan_heap().
>
> Yeah, arguably it's just a matter of semantics, but IMO it's a lot
> clearer to simply think in terms of "here's the next blocks we know we
> want to vacuum" instead of "we vacuum everything, but sometimes we skip
> some blocks".

Even "we vacuum some stuff, but sometimes we skip some blocks" would
be okay. What we have now is "we vacuum some stuff, but sometimes we
skip some blocks, but only if we would skip enough blocks, and, when
we decide to do that we can't go back and actually get visibility
information for those blocks we skipped because we are too cheap"
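
(For anyone following along: "enough blocks" is SKIP_PAGES_THRESHOLD,
which is 32 in vacuumlazy.c, and the rule boils down to something like
this toy, not the actual implementation:)

#include <stdbool.h>
#include <stdio.h>

#define SKIP_PAGES_THRESHOLD 32	/* value used in vacuumlazy.c */

/*
 * A run of all-visible blocks [next_block, next_unskippable_block) is
 * only skipped if it is long enough; shorter runs are read anyway so
 * that the kernel's sequential readahead is not defeated.
 */
static bool
range_is_skipped(unsigned next_block, unsigned next_unskippable_block)
{
	return next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD;
}

int
main(void)
{
	printf("run of 10: %s\n",
		   range_is_skipped(100, 110) ? "skipped" : "scanned anyway");
	printf("run of 40: %s\n",
		   range_is_skipped(100, 140) ? "skipped" : "scanned anyway");
	return 0;
}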

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-26 Thread vignesh C
On Fri, 12 Jan 2024 at 05:12, Melanie Plageman wrote:
>
> v3 attached

CFBot shows that the patch does not apply anymore as in [1]:
=== applying patch
./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch
patching file src/backend/access/heap/vacuumlazy.c
...
Hunk #10 FAILED at 1042.
Hunk #11 FAILED at 1121.
Hunk #12 FAILED at 1132.
Hunk #13 FAILED at 1161.
Hunk #14 FAILED at 1172.
Hunk #15 FAILED at 1194.
...
6 out of 21 hunks FAILED -- saving rejects to file
src/backend/access/heap/vacuumlazy.c.rej

Please post an updated version for the same.

[1] - http://cfbot.cputube.org/patch_46_4755.log

Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-29 Thread Melanie Plageman
On Fri, Jan 26, 2024 at 8:28 AM vignesh C  wrote:
>
> CFBot shows that the patch does not apply anymore as in [1]:
> === applying patch
> ./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch
> patching file src/backend/access/heap/vacuumlazy.c
> ...
> Hunk #10 FAILED at 1042.
> Hunk #11 FAILED at 1121.
> Hunk #12 FAILED at 1132.
> Hunk #13 FAILED at 1161.
> Hunk #14 FAILED at 1172.
> Hunk #15 FAILED at 1194.
> ...
> 6 out of 21 hunks FAILED -- saving rejects to file
> src/backend/access/heap/vacuumlazy.c.rej
>
> Please post an updated version for the same.
>
> [1] - http://cfbot.cputube.org/patch_46_4755.log

Fixed in attached rebased v4

- Melanie
From b71fbf1e9abea4689b57d9439ecdcc4387e91195 Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Sun, 31 Dec 2023 12:49:56 -0500
Subject: [PATCH v4 4/4] Remove unneeded vacuum_delay_point from
 heap_vac_scan_get_next_block

heap_vac_scan_get_next_block() does relatively little work, so there is
no need to call vacuum_delay_point(). A future commit will call
heap_vac_scan_get_next_block() from a callback, and we would like to
avoid calling vacuum_delay_point() in that callback.
---
 src/backend/access/heap/vacuumlazy.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index e5988262611..ea270941379 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1190,8 +1190,6 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
  */
 skipsallvis = true;
 			}
-
-			vacuum_delay_point();
 		}
 
 		vacrel->skip.next_unskippable_block = next_unskippable_block;
-- 
2.37.2

From 5b39165dde6e60ed214bc988eeb58fb9d357030c Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Sat, 30 Dec 2023 16:59:27 -0500
Subject: [PATCH v4 3/4] Confine vacuum skip logic to lazy_scan_skip

In preparation for vacuum to use the streaming read interface (and
eventually AIO), refactor vacuum's logic for skipping blocks such that
it is entirely confined to lazy_scan_skip(). This turns lazy_scan_skip()
and the skip state in LVRelState it uses into an iterator which yields
blocks to lazy_scan_heap(). Such a structure is conducive to an async
interface. While we are at it, rename lazy_scan_skip() to
heap_vac_scan_get_next_block(), which now more accurately describes it.

By always calling heap_vac_scan_get_next_block() -- instead of only when
we have reached the next unskippable block, we no longer need the
skipping_current_range variable. lazy_scan_heap() no longer needs to
manage the skipped range -- checking if we reached the end in order to
then call heap_vac_scan_get_next_block(). And
heap_vac_scan_get_next_block() can derive the visibility status of a
block from whether or not we are in a skippable range -- that is,
whether or not the next_block is equal to the next unskippable block.
---
 src/backend/access/heap/vacuumlazy.c | 243 ++-
 1 file changed, 126 insertions(+), 117 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 077164896fb..e5988262611 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -212,8 +212,8 @@ typedef struct LVRelState
 	int64		missed_dead_tuples; /* # removable, but not removed */
 
 	/*
-	 * Parameters maintained by lazy_scan_skip() to manage skipping ranges of
-	 * pages greater than SKIP_PAGES_THRESHOLD.
+	 * Parameters maintained by heap_vac_scan_get_next_block() to manage
+	 * skipping ranges of pages greater than SKIP_PAGES_THRESHOLD.
 	 */
 	struct
 	{
@@ -238,7 +238,9 @@ typedef struct LVSavedErrInfo
 
 /* non-export function prototypes */
 static void lazy_scan_heap(LVRelState *vacrel);
-static void lazy_scan_skip(LVRelState *vacrel, BlockNumber next_block);
+static bool heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
+		 BlockNumber *blkno,
+		 bool *all_visible_according_to_vm);
 static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
    BlockNumber blkno, Page page,
    bool sharelock);
@@ -820,8 +822,11 @@ static void
 lazy_scan_heap(LVRelState *vacrel)
 {
 	BlockNumber rel_pages = vacrel->rel_pages,
-				blkno,
 				next_fsm_block_to_vacuum = 0;
+	bool		all_visible_according_to_vm;
+
+	/* relies on InvalidBlockNumber overflowing to 0 */
+	BlockNumber blkno = InvalidBlockNumber;
 	VacDeadItems *dead_items = vacrel->dead_items;
 	const int	initprog_index[] = {
 		PROGRESS_VACUUM_PHASE,
@@ -836,40 +841,17 @@ lazy_scan_heap(LVRelState *vacrel)
 	initprog_val[2] = dead_items->max_items;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
+	vacrel->skip.next_unskippable_block = InvalidBlockNumber;
 	vacrel->skip.vmbuffer = InvalidBuffer;
-	/* Set up an initial range of skippable blocks using the visibility map */
-	lazy_scan_skip(vacrel, 0);
-	for (blkno = 0; blkno < rel_pages; blkno++)

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-06 Thread Heikki Linnakangas

On 27/02/2024 21:47, Melanie Plageman wrote:

> The attached v5 has some simplifications when compared to v4 but takes
> largely the same approach.
>
> 0001-0004 are refactoring


I'm looking at just these 0001-0004 patches for now. I like those
changes a lot for the sake of readability even without any of the later
patches.


I made some further changes. I kept them as separate commits for easier 
review, see the commit messages for details. Any thoughts on those changes?


I feel heap_vac_scan_get_next_block() function could use some love. 
Maybe just some rewording of the comments, or maybe some other 
refactoring; not sure. But I'm pretty happy with the function signature 
and how it's called.


BTW, do we have tests that would fail if we botched up 
heap_vac_scan_get_next_block() so that it would skip pages incorrectly, 
for example? Not asking you to write them for this patch, but I'm just 
wondering.


--
Heikki Linnakangas
Neon (https://neon.tech)
From 6e0258c3e31e85526475f46b2e14cbbcbb861909 Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Sat, 30 Dec 2023 16:30:59 -0500
Subject: [PATCH v6 1/9] lazy_scan_skip remove unneeded local var
 nskippable_blocks

nskippable_blocks can be easily derived from next_unskippable_block's
progress when compared to the passed in next_block.
---
 src/backend/access/heap/vacuumlazy.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8b320c3f89a..1dc6cc8e4db 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1103,8 +1103,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
 			   bool *next_unskippable_allvis, bool *skipping_current_range)
 {
 	BlockNumber rel_pages = vacrel->rel_pages,
-				next_unskippable_block = next_block,
-				nskippable_blocks = 0;
+				next_unskippable_block = next_block;
 	bool		skipsallvis = false;
 
 	*next_unskippable_allvis = true;
@@ -1161,7 +1160,6 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
 
 		vacuum_delay_point();
 		next_unskippable_block++;
-		nskippable_blocks++;
 	}
 
 	/*
@@ -1174,7 +1172,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
 	 * non-aggressive VACUUMs.  If the range has any all-visible pages then
 	 * skipping makes updating relfrozenxid unsafe, which is a real downside.
 	 */
-	if (nskippable_blocks < SKIP_PAGES_THRESHOLD)
+	if (next_unskippable_block - next_block < SKIP_PAGES_THRESHOLD)
 		*skipping_current_range = false;
 	else
 	{
-- 
2.39.2

From 258bce44e3275bc628bf984892797eecaebf0404 Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Sat, 30 Dec 2023 16:22:12 -0500
Subject: [PATCH v6 2/9] Add lazy_scan_skip unskippable state to LVRelState

Future commits will remove all skipping logic from lazy_scan_heap() and
confine it to lazy_scan_skip(). To make those commits clearer, first
add a struct to LVRelState containing the variables needed to skip
ranges less than SKIP_PAGES_THRESHOLD.

lazy_scan_prune() and lazy_scan_new_or_empty() can now access the
buffer containing the relevant block of the visibility map through
LVRelState.skip, so it no longer needs to be a separate function
parameter.

While we are at it, add additional information to the lazy_scan_skip()
comment, including descriptions of the role and expectations for its
function parameters.
---
 src/backend/access/heap/vacuumlazy.c | 154 ---
 1 file changed, 90 insertions(+), 64 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 1dc6cc8e4db..0ddb986bc03 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -204,6 +204,22 @@ typedef struct LVRelState
 	int64		live_tuples;	/* # live tuples remaining */
 	int64		recently_dead_tuples;	/* # dead, but not yet removable */
 	int64		missed_dead_tuples; /* # removable, but not removed */
+
+	/*
+	 * Parameters maintained by lazy_scan_skip() to manage skipping ranges of
+	 * pages greater than SKIP_PAGES_THRESHOLD.
+	 */
+	struct
+	{
+		/* Next unskippable block */
+		BlockNumber next_unskippable_block;
+		/* Buffer containing next unskippable block's visibility info */
+		Buffer		vmbuffer;
+		/* Next unskippable block's visibility status */
+		bool		next_unskippable_allvis;
+		/* Whether or not skippable blocks should be skipped */
+		bool		skipping_current_range;
+	}			skip;
 } LVRelState;
 
 /* Struct for saving and restoring vacuum error information. */
@@ -214,19 +230,15 @@ typedef struct LVSavedErrInfo
 	VacErrPhase phase;
 } LVSavedErrInfo;
 
-
 /* non-export function prototypes */
 static void lazy_scan_heap(LVRelState *vacrel);
-static BlockNumber lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer,
-  BlockNumber next_block,
-  bool *next_unskippable_allvis,
-  bool *skipping_current_range);

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-06 Thread Melanie Plageman
On Tue, Feb 27, 2024 at 02:47:03PM -0500, Melanie Plageman wrote:
> On Mon, Jan 29, 2024 at 8:18 PM Melanie Plageman
>  wrote:
> >
> > On Fri, Jan 26, 2024 at 8:28 AM vignesh C  wrote:
> > >
> > > CFBot shows that the patch does not apply anymore as in [1]:
> > > === applying patch
> > > ./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch
> > > patching file src/backend/access/heap/vacuumlazy.c
> > > ...
> > > Hunk #10 FAILED at 1042.
> > > Hunk #11 FAILED at 1121.
> > > Hunk #12 FAILED at 1132.
> > > Hunk #13 FAILED at 1161.
> > > Hunk #14 FAILED at 1172.
> > > Hunk #15 FAILED at 1194.
> > > ...
> > > 6 out of 21 hunks FAILED -- saving rejects to file
> > > src/backend/access/heap/vacuumlazy.c.rej
> > >
> > > Please post an updated version for the same.
> > >
> > > [1] - http://cfbot.cputube.org/patch_46_4755.log
> >
> > Fixed in attached rebased v4
> 
> In light of Thomas' update to the streaming read API [1], I have
> rebased and updated this patch set.
> 
> The attached v5 has some simplifications when compared to v4 but takes
> largely the same approach.

Attached is a patch set (v5a) which updates the streaming read user for
vacuum to fix an issue Andrey Borodin pointed out to me off-list.

Note that I started writing this email before seeing Heikki's upthread
review [1], so I will respond to that in a bit. There are no changes in
v5a to any of the prelim refactoring patches which Heikki reviewed in
that email. I only changed the vacuum streaming read users (last two
patches in the set).

Back to this patch set:
Andrey pointed out that it was failing to compile on windows and the
reason is that I had accidentally left an undefined variable "index" in
these places

Assert(index > 0);
...
ereport(DEBUG2,
		(errmsg("table \"%s\": removed %lld dead item identifiers in %u pages",
				vacrel->relname, (long long) index, vacuumed_pages)));

See https://cirrus-ci.com/task/6312305361682432

I don't understand how this didn't warn me (or fail to compile) for an
assert build on my own workstation. It seems to think "index" is a
function? (Presumably the legacy string function index(3), which glibc
declares in <strings.h>, so "index > 0" compiles as a function-pointer
comparison.)

Anyway, thinking about what the correct assertion would be here:

Assert(index > 0);
Assert(vacrel->num_index_scans > 1 ||
	   (rbstate->end_idx == vacrel->lpdead_items &&
		vacuumed_pages == vacrel->lpdead_item_pages));

I think I can just replace "index" with "rbstate->end_index". At the end
of reaping, this should have the same value that index would have had.
The issue with this is if pg_streaming_read_buffer_get_next() somehow
never returned a valid buffer (there were no dead items), then rbstate
would potentially be uninitialized. The old assertion (index > 0) would
only have been true if there were some dead items, but there isn't an
explicit assertion in this function that there were some dead items.
Perhaps it is worth adding this? Even if we add this, perhaps it is
unacceptable from a programming standpoint to use rbstate in that scope?
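
Something like this guarded shape is what I have in mind -- a standalone
toy of the pattern (only inspect state that a successful iterator call
initialized), not the patch itself:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the read-buffer state; end_idx is set by the iterator */
typedef struct
{
	int			end_idx;
} ToyRBState;

/* Toy iterator: returns false once all ndead items have been handed out */
static bool
get_next(ToyRBState *rbstate, int ndead, int *pos)
{
	if (*pos >= ndead)
		return false;
	rbstate->end_idx = ++(*pos);
	return true;
}

int
main(void)
{
	ToyRBState	rbstate;		/* uninitialized until get_next() succeeds */
	int			ndead = 3;
	int			pos = 0;
	int			vacuumed_pages = 0;

	while (get_next(&rbstate, ndead, &pos))
		vacuumed_pages++;

	/* Only look at rbstate if the loop actually ran at least once */
	if (vacuumed_pages > 0)
		assert(rbstate.end_idx == ndead);

	printf("vacuumed %d pages\n", vacuumed_pages);
	return 0;
}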

In addition to fixing this slip-up, I have done some performance testing
for streaming read vacuum. Note that these tests are for both vacuum
passes (1 and 2) using streaming read.

Performance results:

The TL;DR of my performance results is that streaming read vacuum is
faster. However, there is an issue with the interaction of the streaming
read code and the vacuum buffer access strategy which must be addressed.

Note that "master" in the results below is actually just a commit on my
branch [2] before the one adding the vacuum streaming read users. So it
includes all of my refactoring of the vacuum code from the preliminary
patches.

I tested two vacuum "data states". Both are relatively small tables
because the impact of streaming read can easily be seen even at small
table sizes. DDL for both data states is at the end of the email.

The first data state is a 28 MB table which has never been vacuumed and
has one or two dead tuples on every block. All of the blocks have dead
tuples, so all of the blocks must be vacuumed. We'll call this the
"sequential" data state.

The second data state is a 67 MB table which has been vacuumed and then
a small percentage of the blocks (non-consecutive blocks at irregular
intervals) are updated afterward. Because the visibility map has been
updated and only a few blocks have dead tuples, large ranges of blocks
do not need to be vacuumed. There is at least one run of blocks with
dead tuples larger than 1 block but most of the blocks with dead tuples
are a single block followed by many blocks with no dead tuples. We'll
call this the "few" data state.

I tested these data states with "master" and with streaming read vacuum
with three caching options:

- table data fully in shared buffers (via pg_prewarm)
- table data in the kernel buffer cache but not in shared buffers
- table data completely uncached

I tested the OS cached and uncached caching options with bo

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-06 Thread Melanie Plageman
On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:
> On 27/02/2024 21:47, Melanie Plageman wrote:
> > The attached v5 has some simplifications when compared to v4 but takes
> > largely the same approach.
> > 
> > 0001-0004 are refactoring
> 
> I'm looking at just these 0001-0004 patches for now. I like those changes a
> lot for the sake of readablity even without any of the later patches.

Thanks! And thanks so much for the review!

I've done a small performance experiment comparing a branch with all of
the patches applied (your v6 0001-0009) with master. I made an 11 GB
table that has 1,394,328 blocks. For setup, I vacuumed it to update the
VM and made sure it was entirely in shared buffers. All of this was to
make sure all of the blocks would be skipped and we spend the majority
of the time spinning through the lazy_scan_heap() code. Then I ran
vacuum again (the actual test). I saw vacuum go from 13 ms to 10 ms
with the patches applied.

I think I need to do some profiling to see if the difference is actually
due to our code changes, but I thought I would share preliminary
results.

> I made some further changes. I kept them as separate commits for easier
> review, see the commit messages for details. Any thoughts on those changes?

I've given some inline feedback on most of the extra patches you added.
Short answer is they all seem fine to me except I have a reservations
about 0008 because of the number of blkno variables flying around. I
didn't have a chance to rebase these into my existing changes today, so
either I will do it tomorrow or, if you are feeling like you're on a
roll and want to do it, that also works!

> I feel heap_vac_scan_get_next_block() function could use some love. Maybe
> just some rewording of the comments, or maybe some other refactoring; not
> sure. But I'm pretty happy with the function signature and how it's called.

I was wondering if we should remove the "get" and just go with
heap_vac_scan_next_block(). I didn't do that originally because I didn't
want to imply that the next block was literally the sequentially next
block, but I think maybe I was overthinking it.

Another idea is to call it heap_scan_vac_next_block() and then the order
of the words is more like the table AM functions that get the next block
(e.g. heapam_scan_bitmap_next_block()). Though maybe we don't want it to
be too similar to those since this isn't a table AM callback.

As for other refactoring and other rewording of comments and such, I
will take a pass at this tomorrow.

> BTW, do we have tests that would fail if we botched up
> heap_vac_scan_get_next_block() so that it would skip pages incorrectly, for
> example? Not asking you to write them for this patch, but I'm just
> wondering.

So, while developing this, when I messed up and skipped blocks I
shouldn't, vacuum would error out with the "found xmin from before
relfrozenxid" error -- which would cause random tests to fail. I know
that's not a correctly failing test of this code. I think there might be
some tests in the verify_heapam tests that could/do test this kind of
thing but I don't remember them failing for me during development -- so
I didn't spend much time looking at them.

I would also sometimes get freespace or VM tests that would fail because
those blocks that are incorrectly skipped were meant to be reflected in
the FSM or VM in those tests.

All of that is to say, perhaps we should write a more targeted test?

When I was writing the code, I added logging of skipped blocks and then
came up with different scenarios and ran them on master and with the
patch and diffed the logs.

> From b4047b941182af0643838fde056c298d5cc3ae32 Mon Sep 17 00:00:00 2001
> From: Heikki Linnakangas 
> Date: Wed, 6 Mar 2024 20:13:42 +0200
> Subject: [PATCH v6 5/9] Remove unused 'skipping_current_range' field
> 
> ---
>  src/backend/access/heap/vacuumlazy.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/src/backend/access/heap/vacuumlazy.c 
> b/src/backend/access/heap/vacuumlazy.c
> index 65d257aab83..51391870bf3 100644
> --- a/src/backend/access/heap/vacuumlazy.c
> +++ b/src/backend/access/heap/vacuumlazy.c
> @@ -217,8 +217,6 @@ typedef struct LVRelState
>   Buffer  vmbuffer;
>   /* Next unskippable block's visibility status */
>   boolnext_unskippable_allvis;
> - /* Whether or not skippable blocks should be skipped */
> - boolskipping_current_range;
>   }   skip;
>  } LVRelState;
>  
> -- 
> 2.39.2
> 

Oops! I thought I removed this. I must have forgotten.

> From 27e431e8dc69bbf09d831cb1cf2903d16f177d74 Mon Sep 17 00:00:00 2001
> From: Heikki Linnakangas 
> Date: Wed, 6 Mar 2024 20:58:57 +0200
> Subject: [PATCH v6 6/9] Move vmbuffer back to a local varible in
>  lazy_scan_heap()
> 
> It felt confusing that we passed around the current block, 'blkno', as
> an argument to lazy_scan_new_or_empty() and lazy_scan_prune(), but
> 'vmbuffer' was accessed directly in the 'scan_state'.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-07 Thread Melanie Plageman
On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:
> On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:
> > I made some further changes. I kept them as separate commits for easier
> > review, see the commit messages for details. Any thoughts on those changes?
> 
> I've given some inline feedback on most of the extra patches you added.
> Short answer is they all seem fine to me except I have a reservations
> about 0008 because of the number of blkno variables flying around. I
> didn't have a chance to rebase these into my existing changes today, so
> either I will do it tomorrow or, if you are feeling like you're on a
> roll and want to do it, that also works!

Attached v7 contains all of the changes that you suggested plus some
additional cleanups here and there.

> > I feel heap_vac_scan_get_next_block() function could use some love. Maybe
> > just some rewording of the comments, or maybe some other refactoring; not
> > sure. But I'm pretty happy with the function signature and how it's called.

I've cleaned up the comments on heap_vac_scan_next_block() in the first
couple patches (not so much in the streaming read user). Let me know if
it addresses your feelings or if I should look for other things I could
change.

I will say that now all of the variable names are *very* long. I didn't
want to remove the "state" from LVRelState->next_block_state. (In fact, I
kind of miss the "get". But I had to draw the line somewhere.) I think
without "state" in the name, next_block sounds too much like a function.

Any ideas for shortening the names of next_block_state and its members
or are you fine with them?

> I was wondering if we should remove the "get" and just go with
> heap_vac_scan_next_block(). I didn't do that originally because I didn't
> want to imply that the next block was literally the sequentially next
> block, but I think maybe I was overthinking it.
> 
> Another idea is to call it heap_scan_vac_next_block() and then the order
> of the words is more like the table AM functions that get the next block
> (e.g. heapam_scan_bitmap_next_block()). Though maybe we don't want it to
> be too similar to those since this isn't a table AM callback.

I've done a version of this.

> > From 27e431e8dc69bbf09d831cb1cf2903d16f177d74 Mon Sep 17 00:00:00 2001
> > From: Heikki Linnakangas 
> > Date: Wed, 6 Mar 2024 20:58:57 +0200
> > Subject: [PATCH v6 6/9] Move vmbuffer back to a local varible in
> >  lazy_scan_heap()
> > 
> > It felt confusing that we passed around the current block, 'blkno', as
> > an argument to lazy_scan_new_or_empty() and lazy_scan_prune(), but
> > 'vmbuffer' was accessed directly in the 'scan_state'.
> > 
> > It was also a bit vague, when exactly 'vmbuffer' was valid. Calling
> > heap_vac_scan_get_next_block() set it, sometimes, to a buffer that
> > might or might not contain the VM bit for 'blkno'. But other
> > functions, like lazy_scan_prune(), assumed it to contain the correct
> > buffer. That was fixed up visibilitymap_pin(). But clearly it was not
> > "owned" by heap_vac_scan_get_next_block(), like the other 'scan_state'
> > fields.
> > 
> > I moved it back to a local variable, like it was. Maybe there would be
> > even better ways to handle it, but at least this is not worse than
> > what we have in master currently.
> 
> I'm fine with this. I did it the way I did (grouping it with the
> "next_unskippable_block" in the skip struct), because I think that this
> vmbuffer is always the buffer containing the VM bit for the next
> unskippable block -- which sometimes is the block returned by
> heap_vac_scan_get_next_block() and sometimes isn't.
> 
> I agree it might be best as a local variable but perhaps we could retain
> the comment about it being the block of the VM containing the bit for the
> next unskippable block. (Honestly, the whole thing is very confusing).

In 0001-0004 I've stuck with only having the local variable vmbuffer in
lazy_scan_heap().

In 0006 (introducing pass 1 vacuum streaming read user) I added a
vmbuffer back to the next_block_state (while also keeping the local
variable vmbuffer in lazy_scan_heap()). The vmbuffer in lazy_scan_heap()
contains the block of the VM containing visi information for the next
unskippable block or for the current block if its visi information
happens to be in the same block of the VM as either 1) the next
unskippable block or 2) the most recently processed heap block.

Streaming read vacuum separates this visibility check in
heap_vac_scan_next_block() from the main loop of lazy_scan_heap(), so we
can't just use a local variable anymore. Now the local variable vmbuffer
in lazy_scan_heap() will only already contain the block with the visi
information for the to-be-processed block if it happens to be in the
same VM block as the most recently processed heap block. That means
potentially more VM fetches.

However, by adding a vmbuffer to next_block_state, the callback may be
able to avoid extra VM fetches from one invocation to the next.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Heikki Linnakangas

On 08/03/2024 02:46, Melanie Plageman wrote:

> On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:
> > > I feel heap_vac_scan_get_next_block() function could use some love. Maybe
> > > just some rewording of the comments, or maybe some other refactoring; not
> > > sure. But I'm pretty happy with the function signature and how it's called.
>
> I've cleaned up the comments on heap_vac_scan_next_block() in the first
> couple patches (not so much in the streaming read user). Let me know if
> it addresses your feelings or if I should look for other things I could
> change.


Thanks, that is better. I think I now finally understand how the 
function works, and now I can see some more issues and refactoring 
opportunities :-).


Looking at current lazy_scan_skip() code in 'master', one thing now 
caught my eye (and it's the same with your patches):



	*next_unskippable_allvis = true;
	while (next_unskippable_block < rel_pages)
	{
		uint8		mapbits = visibilitymap_get_status(vacrel->rel,
													   next_unskippable_block,
													   vmbuffer);

		if ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)
		{
			Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);
			*next_unskippable_allvis = false;
			break;
		}

		/*
		 * Caller must scan the last page to determine whether it has tuples
		 * (caller must have the opportunity to set vacrel->nonempty_pages).
		 * This rule avoids having lazy_truncate_heap() take access-exclusive
		 * lock on rel to attempt a truncation that fails anyway, just because
		 * there are tuples on the last page (it is likely that there will be
		 * tuples on other nearby pages as well, but those can be skipped).
		 *
		 * Implement this by always treating the last block as unsafe to skip.
		 */
		if (next_unskippable_block == rel_pages - 1)
			break;

		/* DISABLE_PAGE_SKIPPING makes all skipping unsafe */
		if (!vacrel->skipwithvm)
		{
			/* Caller shouldn't rely on all_visible_according_to_vm */
			*next_unskippable_allvis = false;
			break;
		}

		/*
		 * Aggressive VACUUM caller can't skip pages just because they are
		 * all-visible.  They may still skip all-frozen pages, which can't
		 * contain XIDs < OldestXmin (XIDs that aren't already frozen by now).
		 */
		if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0)
		{
			if (vacrel->aggressive)
				break;

			/*
			 * All-visible block is safe to skip in non-aggressive case.  But
			 * remember that the final range contains such a block for later.
			 */
			skipsallvis = true;
		}

		/* XXX: is it OK to remove this? */
		vacuum_delay_point();
		next_unskippable_block++;
		nskippable_blocks++;
	}


Firstly, it seems silly to check DISABLE_PAGE_SKIPPING within the loop. 
When DISABLE_PAGE_SKIPPING is set, we always return the next block and 
set *next_unskippable_allvis = false regardless of the visibility map, 
so why bother checking the visibility map at all?


Except at the very last block of the relation! If you look carefully, 
at the last block we do return *next_unskippable_allvis = true, if the 
VM says so, even if DISABLE_PAGE_SKIPPING is set. I think that's wrong. 
Surely the intention was to pretend that none of the VM bits were set if 
DISABLE_PAGE_SKIPPING is used, also for the last block.
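
A compilable toy of that control flow makes the inconsistency visible
(logic condensed from the loop quoted above, with the
aggressive/all-frozen branch omitted; this is not the PostgreSQL code):

#include <stdbool.h>
#include <stdio.h>

/*
 * Find the next unskippable block starting at next_block and report its
 * all-visible status, mirroring the order of checks in lazy_scan_skip().
 */
static bool
next_unskippable(unsigned next_block, unsigned rel_pages,
				 const bool *vm_allvis, bool skipwithvm,
				 unsigned *next_unskippable_block)
{
	unsigned	blk = next_block;
	bool		allvis = true;

	while (blk < rel_pages)
	{
		if (!vm_allvis[blk])
		{
			allvis = false;
			break;
		}
		if (blk == rel_pages - 1)
			break;				/* last block: always scanned, allvis kept */
		if (!skipwithvm)
		{
			allvis = false;		/* the DISABLE_PAGE_SKIPPING case */
			break;
		}
		blk++;
	}
	*next_unskippable_block = blk;
	return allvis;
}

int
main(void)
{
	bool		vm[] = {true, true, true};	/* VM: every block all-visible */
	unsigned	nub;

	/* With DISABLE_PAGE_SKIPPING (skipwithvm = false): */
	printf("start at 0: allvis = %d\n",
		   next_unskippable(0, 3, vm, false, &nub));
	printf("start at 2: allvis = %d\n",
		   next_unskippable(2, 3, vm, false, &nub));
	return 0;
}

The first call reports allvis = 0, but at the last block it reports
allvis = 1 straight from the VM, despite DISABLE_PAGE_SKIPPING.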


This was changed in commit 980ae17310:


@@ -1311,7 +1327,11 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
 
 		/* DISABLE_PAGE_SKIPPING makes all skipping unsafe */
 		if (!vacrel->skipwithvm)
+		{
+			/* Caller shouldn't rely on all_visible_according_to_vm */
+			*next_unskippable_allvis = false;
 			break;
+		}


Before that, *next_unskippable_allvis was set correctly according to the 
VM, even when DISABLE_PAGE_SKIPPING was used. It's not clear to me why 
that was changed. And I think setting it to 'true' would be a more 
failsafe value than 'false'. When *next_unskippable_allvis is set to 
true, the caller cannot rely on it because a concurrent modifi

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Peter Geoghegan
On Fri, Mar 8, 2024 at 8:49 AM Heikki Linnakangas  wrote:
> ISTM we should revert the above hunk, and backpatch it to v16. I'm a
> little wary because I don't understand why that change was made in the
> first place, though. I think it was just an ill-advised attempt at
> tidying up the code as part of the larger commit, but I'm not sure.
> Peter, do you remember?

I think that it makes sense to set the VM when indicated by
lazy_scan_prune, independent of what either the visibility map or the
page's PD_ALL_VISIBLE marking say. The whole point of
DISABLE_PAGE_SKIPPING is to deal with VM corruption, after all.

In retrospect I didn't handle this particular aspect very well in
commit 980ae17310. The approach I took is a bit crude (and in any case
slightly wrong in that it is inconsistent in how it handles the last
page). But it has the merit of fixing the case where we just have the
VM's all-frozen bit set for a given block (not the all-visible bit
set) -- which is always wrong. There was good reason to be concerned
about that possibility when 980ae17310 went in.

--
Peter Geoghegan




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Melanie Plageman
On Fri, Mar 8, 2024 at 8:49 AM Heikki Linnakangas  wrote:
>
> On 08/03/2024 02:46, Melanie Plageman wrote:
> > On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:
> >>> I feel heap_vac_scan_get_next_block() function could use some love. Maybe
> >>> just some rewording of the comments, or maybe some other refactoring; not
> >>> sure. But I'm pretty happy with the function signature and how it's 
> >>> called.
> >
> > I've cleaned up the comments on heap_vac_scan_next_block() in the first
> > couple patches (not so much in the streaming read user). Let me know if
> > it addresses your feelings or if I should look for other things I could
> > change.
>
> Thanks, that is better. I think I now finally understand how the
> function works, and now I can see some more issues and refactoring
> opportunities :-).
>
> Looking at current lazy_scan_skip() code in 'master', one thing now
> caught my eye (and it's the same with your patches):
>
> > [...]
>
> Firstly, it seems silly to check DISABLE_PAGE_SKIPPING within the loop.
> When DISABLE_PAGE_SKIPPING is set, we always return the next block and
> set *next_unskippable_allvis = false regardless of the visibility map,
> so why bother checking the visibility map at all?
>
> Except at the very last block of the relation! If you look carefully,
> at the last block we do return *next_unskippable_allvis = true, if the
> VM says so, even if DISABLE_PAGE_SKIPPING is set. I think that's wrong.
> Surely the intention was to pretend that none of the VM bits were set if
> DISABLE_PAGE_SKIPPING is used, also for the last block.

I agree that having next_unskippable_allvis and, as a consequence,
all_visible_according_to_vm set to true for the last block seems
wrong. And It makes sense from a loop efficiency standpoint also to
move it up to the top. However, making that change would have us end
up dirtying all pages in your example.

> This was changed in commit 980ae17310:
>
> > @@ -1311,7 +1327,11 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, 
> > BlockNumber next_block,
> >
> > /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Melanie Plageman
On Fri, Mar 8, 2024 at 10:41 AM Peter Geoghegan  wrote:
>
> On Fri, Mar 8, 2024 at 8:49 AM Heikki Linnakangas  wrote:
> > ISTM we should revert the above hunk, and backpatch it to v16. I'm a
> > little wary because I don't understand why that change was made in the
> > first place, though. I think it was just an ill-advised attempt at
> > tidying up the code as part of the larger commit, but I'm not sure.
> > Peter, do you remember?
>
> I think that it makes sense to set the VM when indicated by
> lazy_scan_prune, independent of what either the visibility map or the
> page's PD_ALL_VISIBLE marking say. The whole point of
> DISABLE_PAGE_SKIPPING is to deal with VM corruption, after all.

Not that it will be fun to maintain another special case in the VM
update code in lazy_scan_prune(), but we could have a special case
that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if
all_visible_according_to_vm is true and all_visible is true, we update
the VM but don't dirty the page. The docs on DISABLE_PAGE_SKIPPING say
it is meant to deal with VM corruption -- it doesn't say anything
about dealing with incorrectly set PD_ALL_VISIBLE markings.

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Peter Geoghegan
On Fri, Mar 8, 2024 at 10:48 AM Melanie Plageman
 wrote:
> Not that it will be fun to maintain another special case in the VM
> update code in lazy_scan_prune(), but we could have a special case
> that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if
> all_visible_according_to_vm is true and all_visible is true, we update
> the VM but don't dirty the page.

It wouldn't necessarily have to be a special case, I think.

We already conditionally set PD_ALL_VISIBLE/call PageIsAllVisible() in
the block where lazy_scan_prune marks a previously all-visible page
all-frozen -- we don't want to dirty the page unnecessarily there.
Making it conditional is defensive in that particular block (this was
also added by this same commit of mine), and avoids dirtying the page.

Seems like it might be possible to simplify/consolidate the VM-setting
code that's now located at the end of lazy_scan_prune. Perhaps the two
distinct blocks that call visibilitymap_set() could be combined into
one.

-- 
Peter Geoghegan




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Peter Geoghegan
On Fri, Mar 8, 2024 at 11:00 AM Peter Geoghegan  wrote:
> Seems like it might be possible to simplify/consolidate the VM-setting
> code that's now located at the end of lazy_scan_prune. Perhaps the two
> distinct blocks that call visibilitymap_set() could be combined into
> one.

FWIW I think that my error here might have had something to do with
hallucinating that the code already did things that way.

At the time this went in, I was working on a patchset that did things
this way (more or less). It broke the dependency on
all_visible_according_to_vm entirely, which simplified the
set-and-check-VM code that's now at the end of lazy_scan_prune.

Not sure how practical it'd be to do something like that now (not
offhand), but something to consider.

-- 
Peter Geoghegan




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Heikki Linnakangas

On 08/03/2024 02:46, Melanie Plageman wrote:

> On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:
> > On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:
>
> I will say that now all of the variable names are *very* long. I didn't
> want to remove the "state" from LVRelState->next_block_state. (In fact, I
> kind of miss the "get". But I had to draw the line somewhere.) I think
> without "state" in the name, next_block sounds too much like a function.
>
> Any ideas for shortening the names of next_block_state and its members
> or are you fine with them?


Hmm, we can remove the inner struct and add the fields directly into 
LVRelState. LVRelState already contains many groups of variables, like 
"Error reporting state", with no inner structs. I did it that way in the 
attached patch. I also used local variables more.
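
Roughly like this -- a sketch of the flattened layout with stand-in
typedefs; the field names follow the skip struct above and may not match
the final patch exactly:

#include <stdbool.h>

typedef unsigned int BlockNumber;	/* stand-ins for the real typedefs */
typedef int Buffer;

typedef struct LVRelState
{
	/* ... many other field groups elided ... */

	/* State maintained by heap_vac_scan_next_block(), no inner struct */
	BlockNumber next_unskippable_block; /* next block we cannot skip */
	bool		next_unskippable_allvis;	/* its all-visible status */
	Buffer		next_unskippable_vmbuffer;	/* VM buffer pinned for it */
} LVRelState;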



> > I was wondering if we should remove the "get" and just go with
> > heap_vac_scan_next_block(). I didn't do that originally because I didn't
> > want to imply that the next block was literally the sequentially next
> > block, but I think maybe I was overthinking it.
> >
> > Another idea is to call it heap_scan_vac_next_block() and then the order
> > of the words is more like the table AM functions that get the next block
> > (e.g. heapam_scan_bitmap_next_block()). Though maybe we don't want it to
> > be too similar to those since this isn't a table AM callback.
>
> I've done a version of this.


+1


> However, by adding a vmbuffer to next_block_state, the callback may be
> able to avoid extra VM fetches from one invocation to the next.


That's a good idea, holding separate VM buffer pins for the 
next-unskippable block and the block we're processing. I adopted that 
approach.


My compiler caught one small bug when I was playing with various 
refactorings of this: heap_vac_scan_next_block() must set *blkno to 
rel_pages, not InvalidBlockNumber, after the last block. The caller uses 
the 'blkno' variable also after the loop, and assumes that it's set to 
rel_pages.


I'm pretty happy with the attached patches now. The first one fixes the 
existing bug I mentioned in the other email (based on the on-going 
discussion that might not how we want to fix it though). Second commit 
is a squash of most of the patches. Third patch is the removal of the 
delay point, that seems worthwhile to keep separate.


--
Heikki Linnakangas
Neon (https://neon.tech)
From b68cb29c547de3c4acd10f31aad47b453d154666 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas 
Date: Fri, 8 Mar 2024 16:00:22 +0200
Subject: [PATCH v8 1/3] Set all_visible_according_to_vm correctly with
 DISABLE_PAGE_SKIPPING

It's important for 'all_visible_according_to_vm' to correctly reflect
whether the VM bit is set or not, even when we are not trusting the VM
to skip pages, because contrary to what the comment said,
lazy_scan_prune() relies on it.

If it's incorrectly set to 'false', when the VM bit is in fact set,
lazy_scan_prune() will try to set the VM bit again and dirty the page
unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all
heap pages were dirtied, even if there were no changes. We would also
fail to clear any VM bits that were set incorrectly.

This was broken in commit 980ae17310, so backpatch to v16.

Backpatch-through: 16
Reviewed-by: Melanie Plageman
Discussion: https://www.postgresql.org/message-id/3df2b582-dc1c-46b6-99b6-38eddd1b2...@iki.fi
---
 src/backend/access/heap/vacuumlazy.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8b320c3f89a..ac55ebd2ae5 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1136,11 +1136,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
 
 		/* DISABLE_PAGE_SKIPPING makes all skipping unsafe */
 		if (!vacrel->skipwithvm)
-		{
-			/* Caller shouldn't rely on all_visible_according_to_vm */
-			*next_unskippable_allvis = false;
 			break;
-		}
 
 		/*
 		 * Aggressive VACUUM caller can't skip pages just because they are
-- 
2.39.2

From 47af1ca65cf55ca876869b43bff47f9d43f0750e Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas 
Date: Fri, 8 Mar 2024 17:32:19 +0200
Subject: [PATCH v8 2/3] Confine vacuum skip logic to lazy_scan_skip()

Rename lazy_scan_skip() to heap_vac_scan_next_block() and move more
code into the function, so that the caller doesn't need to know about
ranges or skipping anymore. heap_vac_scan_next_block() returns the
next block to process, and the logic for determining that block is all
within the function. This makes the skipping logic easier to
understand, as it's all in the same function, and makes the calling
code easier to understand as it's less cluttered. The state variables
needed to manage the skipping logic are moved to LVRelState.

heap_vac_scan_next_block() now manages its own VM buffer separately
from the caller's vmbuffer variable. The caller's vmbuffer holds the
VM page for the current block it's processing, while
heap_vac_scan_next_block() keeps a pin on the VM page for the next
unskippable block. Most of the time they are the same, so we hold two
pins on the same buffer, but it's more convenient to manage them
separately.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Melanie Plageman
On Fri, Mar 8, 2024 at 11:00 AM Peter Geoghegan  wrote:
>
> On Fri, Mar 8, 2024 at 10:48 AM Melanie Plageman
>  wrote:
> > Not that it will be fun to maintain another special case in the VM
> > update code in lazy_scan_prune(), but we could have a special case
> > that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if
> > all_visible_according_to_vm is true and all_visible is true, we update
> > the VM but don't dirty the page.
>
> It wouldn't necessarily have to be a special case, I think.
>
> We already conditionally set PD_ALL_VISIBLE/call PageIsAllVisible() in
> the block where lazy_scan_prune marks a previously all-visible page
> all-frozen -- we don't want to dirty the page unnecessarily there.
> Making it conditional is defensive in that particular block (this was
> also added by this same commit of mine), and avoids dirtying the page.

Ah, I see. I got confused. Even if the VM is suspect, if the page is
all visible and the heap block is already set all-visible in the VM,
there is no need to update it.

This did make me realize that it seems like there is a case we don't
handle in master with the current code that would be fixed by changing
that code Heikki mentioned:

Right now, even if the heap block is incorrectly marked all-visible in
the VM, if DISABLE_PAGE_SKIPPING is passed to vacuum,
all_visible_according_to_vm will be passed to lazy_scan_prune() as
false. Then even if lazy_scan_prune() finds that the page is not
all-visible, we won't call visibilitymap_clear().

If we revert the code setting next_unskippable_allvis to false in
lazy_scan_skip() when vacrel->skipwithvm is false and allow
all_visible_according_to_vm to be true when the VM has it incorrectly
set to true, then once lazy_scan_prune() discovers the page is not
all-visible and assuming PD_ALL_VISIBLE is not marked so
PageIsAllVisible() returns false, we will call visibilitymap_clear()
to clear the incorrectly set VM bit (without dirtying the page).

Here is a table of the variable states at the end of lazy_scan_prune()
for clarity:

master:
  all_visible_according_to_vm: false
  all_visible:                 false
  VM says all vis:             true
  PageIsAllVisible:            false

if fixed:
  all_visible_according_to_vm: true
  all_visible:                 false
  VM says all vis:             true
  PageIsAllVisible:            false
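
For reference, the pre-existing check near the end of the
lazy_scan_prune() path that would then clear the stale bit looks roughly
like this (simplified from master, not the exact committed text):

    else if (all_visible_according_to_vm && !PageIsAllVisible(page) &&
             VM_ALL_VISIBLE(vacrel->rel, blkno, &vmbuffer))
    {
        elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u",
             vacrel->relname, blkno);
        /* clear the incorrectly set VM bit, without dirtying the heap page */
        visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
                            VISIBILITYMAP_VALID_BITS);
    }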

> Seems like it might be possible to simplify/consolidate the VM-setting
> code that's now located at the end of lazy_scan_prune. Perhaps the two
> distinct blocks that call visibilitymap_set() could be combined into
> one.

I agree. I have some code to do that in an unproposed patch which
combines the VM updates into the prune record. We will definitely want
to reorganize the code when we do that record combining.

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Melanie Plageman
On Fri, Mar 8, 2024 at 11:31 AM Melanie Plageman
 wrote:
>
> On Fri, Mar 8, 2024 at 11:00 AM Peter Geoghegan  wrote:
> >
> > On Fri, Mar 8, 2024 at 10:48 AM Melanie Plageman
> >  wrote:
> > > Not that it will be fun to maintain another special case in the VM
> > > update code in lazy_scan_prune(), but we could have a special case
> > > that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if
> > > all_visible_according_to_vm is true and all_visible is true, we update
> > > the VM but don't dirty the page.
> >
> > It wouldn't necessarily have to be a special case, I think.
> >
> > We already conditionally set PD_ALL_VISIBLE/call PageIsAllVisible() in
> > the block where lazy_scan_prune marks a previously all-visible page
> > all-frozen -- we don't want to dirty the page unnecessarily there.
> > Making it conditional is defensive in that particular block (this was
> > also added by this same commit of mine), and avoids dirtying the page.
>
> Ah, I see. I got confused. Even if the VM is suspect, if the page is
> all visible and the heap block is already set all-visible in the VM,
> there is no need to update it.
>
> This did make me realize that it seems like there is a case we don't
> handle in master with the current code that would be fixed by changing
> that code Heikki mentioned:
>
> Right now, even if the heap block is incorrectly marked all-visible in
> the VM, if DISABLE_PAGE_SKIPPING is passed to vacuum,
> all_visible_according_to_vm will be passed to lazy_scan_prune() as
> false. Then even if lazy_scan_prune() finds that the page is not
> all-visible, we won't call visibilitymap_clear().
>
> If we revert the code setting next_unskippable_allvis to false in
> lazy_scan_skip() when vacrel->skipwithvm is false and allow
> all_visible_according_to_vm to be true when the VM has it incorrectly
> set to true, then once lazy_scan_prune() discovers the page is not
> all-visible and assuming PD_ALL_VISIBLE is not marked so
> PageIsAllVisible() returns false, we will call visibilitymap_clear()
> to clear the incorrectly set VM bit (without dirtying the page).
>
> Here is a table of the variable states at the end of lazy_scan_prune()
> for clarity:
>
> master:
>   all_visible_according_to_vm: false
>   all_visible:                 false
>   VM says all vis:             true
>   PageIsAllVisible:            false
>
> if fixed:
>   all_visible_according_to_vm: true
>   all_visible:                 false
>   VM says all vis:             true
>   PageIsAllVisible:            false

Okay, I now see from Heikki's v8-0001 that he was already aware of this.

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Melanie Plageman
On Fri, Mar 08, 2024 at 06:07:33PM +0200, Heikki Linnakangas wrote:
> On 08/03/2024 02:46, Melanie Plageman wrote:
> > On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:
> > > On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:
> > I will say that now all of the variable names are *very* long. I didn't
> > want to remove the "state" from LVRelState->next_block_state. (In fact, I
> > kind of miss the "get". But I had to draw the line somewhere.) I think
> > without "state" in the name, next_block sounds too much like a function.
> > 
> > Any ideas for shortening the names of next_block_state and its members
> > or are you fine with them?
> 
> Hmm, we can remove the inner struct and add the fields directly into
> LVRelState. LVRelState already contains many groups of variables, like
> "Error reporting state", with no inner structs. I did it that way in the
> attached patch. I also used local variables more.

+1; I like the result of this.

> > However, by adding a vmbuffer to next_block_state, the callback may be
> > able to avoid extra VM fetches from one invocation to the next.
> 
> That's a good idea, holding separate VM buffer pins for the next-unskippable
> block and the block we're processing. I adopted that approach.

Cool. It can't be avoided with streaming read vacuum, but I wonder if
there could ever be adverse effects to doing it on master? Maybe if we
are doing a lot of skipping, the VM page for the heap blocks we are
processing could end up changing each time, when we would have had the
right VM page if we had reused the one from
heap_vac_scan_next_block()?

Frankly, I'm in favor of just doing it now because it makes
lazy_scan_heap() less confusing.

> My compiler caught one small bug when I was playing with various
> refactorings of this: heap_vac_scan_next_block() must set *blkno to
> rel_pages, not InvalidBlockNumber, after the last block. The caller uses the
> 'blkno' variable also after the loop, and assumes that it's set to
> rel_pages.

Oops! Thanks for catching that.

> I'm pretty happy with the attached patches now. The first one fixes the
> existing bug I mentioned in the other email (based on the ongoing
> discussion, that might not be how we want to fix it, though).

ISTM we should still do the fix you mentioned -- seems like it has more
upsides than downsides?

> From b68cb29c547de3c4acd10f31aad47b453d154666 Mon Sep 17 00:00:00 2001
> From: Heikki Linnakangas 
> Date: Fri, 8 Mar 2024 16:00:22 +0200
> Subject: [PATCH v8 1/3] Set all_visible_according_to_vm correctly with
>  DISABLE_PAGE_SKIPPING
> 
> It's important for 'all_visible_according_to_vm' to correctly reflect
> whether the VM bit is set or not, even when we are not trusting the VM
> to skip pages, because contrary to what the comment said,
> lazy_scan_prune() relies on it.
> 
> If it's incorrectly set to 'false', when the VM bit is in fact set,
> lazy_scan_prune() will try to set the VM bit again and dirty the page
> unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all
> heap pages were dirtied, even if there were no changes. We would also
> fail to clear any VM bits that were set incorrectly.
> 
> This was broken in commit 980ae17310, so backpatch to v16.

LGTM.

> From 47af1ca65cf55ca876869b43bff47f9d43f0750e Mon Sep 17 00:00:00 2001
> From: Heikki Linnakangas 
> Date: Fri, 8 Mar 2024 17:32:19 +0200
> Subject: [PATCH v8 2/3] Confine vacuum skip logic to lazy_scan_skip()
> ---
>  src/backend/access/heap/vacuumlazy.c | 256 +++
>  1 file changed, 141 insertions(+), 115 deletions(-)
> 
> diff --git a/src/backend/access/heap/vacuumlazy.c 
> b/src/backend/access/heap/vacuumlazy.c
> index ac55ebd2ae5..0aa08762015 100644
> --- a/src/backend/access/heap/vacuumlazy.c
> +++ b/src/backend/access/heap/vacuumlazy.c
> @@ -204,6 +204,12 @@ typedef struct LVRelState
>   int64   live_tuples;/* # live tuples remaining */
>   int64   recently_dead_tuples;   /* # dead, but not yet 
> removable */
>   int64   missed_dead_tuples; /* # removable, but not removed */

Perhaps we should add a comment to the blkno member of LVRelState
indicating that it is used for error reporting and logging?

> + /* State maintained by heap_vac_scan_next_block() */
> + BlockNumber current_block;  /* last block returned */
> + BlockNumber next_unskippable_block; /* next unskippable block */
> + boolnext_unskippable_allvis;/* its visibility 
> status */
> + Buffer  next_unskippable_vmbuffer;  /* buffer containing 
> its VM bit */
>  } LVRelState;

>  /*
> -static BlockNumber
> -lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
> -bool *next_unskippable_allvis, bool 
> *skipping_current_range)
> +static bool
> +heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
> +						 bool *all_visible_according_to_vm)

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-08 Thread Melanie Plageman
On Fri, Mar 8, 2024 at 12:34 PM Melanie Plageman
 wrote:
>
> On Fri, Mar 08, 2024 at 06:07:33PM +0200, Heikki Linnakangas wrote:
> > On 08/03/2024 02:46, Melanie Plageman wrote:
> > > On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:
> > > > On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:
> > > I will say that now all of the variable names are *very* long. I didn't
> > > want to remove the "state" from LVRelState->next_block_state. (In fact, I
> > > kind of miss the "get". But I had to draw the line somewhere.) I think
> > > without "state" in the name, next_block sounds too much like a function.
> > >
> > > Any ideas for shortening the names of next_block_state and its members
> > > or are you fine with them?
> >
> > Hmm, we can remove the inner struct and add the fields directly into
> > LVRelState. LVRelState already contains many groups of variables, like
> > "Error reporting state", with no inner structs. I did it that way in the
> > attached patch. I also used local variables more.
>
> +1; I like the result of this.

I did some perf testing of 0002 and 0003 using that fully-in-SB vacuum
test I mentioned in an earlier email. 0002 reduces vacuum time from an
average of 11.5 ms on master to 9.6 ms, and 0003 reduces it from 11.5 ms
on master to 7.4 ms.

I profiled them and 0002 seems to simply spend less time in
heap_vac_scan_next_block() than master did in lazy_scan_skip().

And 0003 reduces the time vacuum takes because it removes
vacuum_delay_point(), which shows up pretty high in the profile.

Here are the profiles for my test.

profile of master:

+   29.79%  postgres  postgres   [.] visibilitymap_get_status
+   27.35%  postgres  postgres   [.] vacuum_delay_point
+   17.00%  postgres  postgres   [.] lazy_scan_skip
+6.59%  postgres  postgres   [.] heap_vacuum_rel
+6.43%  postgres  postgres   [.] BufferGetBlockNumber

profile with 0001-0002:

+   40.30%  postgres  postgres   [.] visibilitymap_get_status
+   20.32%  postgres  postgres   [.] vacuum_delay_point
+   20.26%  postgres  postgres   [.] heap_vacuum_rel
+5.17%  postgres  postgres   [.] BufferGetBlockNumber

profile with 0001-0003

+   59.77%  postgres  postgres   [.] visibilitymap_get_status
+   23.86%  postgres  postgres   [.] heap_vacuum_rel
+6.59%  postgres  postgres   [.] StrategyGetBuffer

Test DDL and setup:

psql -c "ALTER SYSTEM SET shared_buffers = '16 GB';"
psql -c "CREATE TABLE foo(id INT, a INT, b INT, c INT, d INT, e INT, f
INT, g INT) with (autovacuum_enabled=false, fillfactor=25);"
psql -c "INSERT INTO foo SELECT i, i, i, i, i, i, i, i FROM
generate_series(1, 4600)i;"
psql -c "VACUUM (FREEZE) foo;"
pg_ctl restart
psql -c "SELECT pg_prewarm('foo');"
# make sure there isn't an ill-timed checkpoint
psql -c "\timing on" -c "vacuum (verbose) foo;"

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-10 Thread Melanie Plageman
On Wed, Mar 6, 2024 at 6:47 PM Melanie Plageman
 wrote:
>
> Performance results:
>
> The TL;DR of my performance results is that streaming read vacuum is
> faster. However there is an issue with the interaction of the streaming
> read code and the vacuum buffer access strategy which must be addressed.

I have investigated the interaction between
maintenance_io_concurrency, streaming reads, and the vacuum buffer
access strategy (BAS_VACUUM).

The streaming read API limits max_pinned_buffers to a pinned buffer
multiplier (currently 4) * maintenance_io_concurrency buffers with the
goal of constructing reads of at least MAX_BUFFERS_PER_TRANSFER size.

Since the BAS_VACUUM ring buffer is size 256 kB or 32 buffers with
default block size, that means that for a fully uncached vacuum in
which all blocks must be vacuumed and will be dirtied, you'd have to
set maintenance_io_concurrency at 8 or lower to see the same number of
reuses (and shared buffer consumption) as master.

Given that we allow users to specify BUFFER_USAGE_LIMIT to vacuum, it
seems like we should force max_pinned_buffers to a value that
guarantees the expected shared buffer usage by vacuum. But that means
that maintenance_io_concurrency does not have a predictable impact on
streaming read vacuum.

What is the right thing to do here?

At the least, the default size of the BAS_VACUUM ring buffer should be
BLCKSZ * pinned_buffer_multiplier * default maintenance_io_concurrency
(probably rounded up to the next power of two) bytes.
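
To put numbers on that (back-of-envelope sketch, assuming the default
8192-byte BLCKSZ and the constants mentioned above):

    int     io_concurrency = 10;                /* default maintenance_io_concurrency */
    int     max_pinned_buffers = 4 * io_concurrency;   /* 40 buffers */
    int     ring_buffers = (256 * 1024) / 8192;        /* BAS_VACUUM ring: 32 */

    /*
     * 40 pinned buffers exceed the 32-buffer BAS_VACUUM ring, so buffer
     * reuse only matches master when 4 * maintenance_io_concurrency <= 32,
     * i.e. maintenance_io_concurrency <= 8.  The suggested default ring
     * would be 8192 * 4 * 10 = 320 kB, rounded up to 512 kB.
     */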

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-10 Thread Thomas Munro
On Mon, Mar 11, 2024 at 5:31 AM Melanie Plageman
 wrote:
> On Wed, Mar 6, 2024 at 6:47 PM Melanie Plageman
>  wrote:
> > Performance results:
> >
> > The TL;DR of my performance results is that streaming read vacuum is
> > faster. However there is an issue with the interaction of the streaming
> > read code and the vacuum buffer access strategy which must be addressed.

Woo.

> I have investigated the interaction between
> maintenance_io_concurrency, streaming reads, and the vacuum buffer
> access strategy (BAS_VACUUM).
>
> The streaming read API limits max_pinned_buffers to a pinned buffer
> multiplier (currently 4) * maintenance_io_concurrency buffers with the
> goal of constructing reads of at least MAX_BUFFERS_PER_TRANSFER size.
>
> Since the BAS_VACUUM ring buffer is size 256 kB or 32 buffers with
> default block size, that means that for a fully uncached vacuum in
> which all blocks must be vacuumed and will be dirtied, you'd have to
> set maintenance_io_concurrency at 8 or lower to see the same number of
> reuses (and shared buffer consumption) as master.
>
> Given that we allow users to specify BUFFER_USAGE_LIMIT to vacuum, it
> seems like we should force max_pinned_buffers to a value that
> guarantees the expected shared buffer usage by vacuum. But that means
> that maintenance_io_concurrency does not have a predictable impact on
> streaming read vacuum.
>
> What is the right thing to do here?
>
> At the least, the default size of the BAS_VACUUM ring buffer should be
> BLCKSZ * pinned_buffer_multiplier * default maintenance_io_concurrency
> (probably rounded up to the next power of two) bytes.

Hmm, does the v6 look-ahead distance control algorithm mitigate that
problem?  Using the ABC classifications from the streaming read
thread, I think for A it should now pin only 1, for B 16 and for C, it
depends on the size of the random 'chunks': if you have a lot of size
1 random reads then it shouldn't go above 10 because of (default)
maintenance_io_concurrency.  The only way to get up to very high
numbers would be to have a lot of random chunks triggering behaviour
C, but each made up of long runs of misses.  For example one can
contrive a BHS query that happens to read pages 0-15 then 20-35 then
40-55 etc etc so that we want to get lots of wide I/Os running
concurrently.  Unless vacuum manages to do something like that, it
shouldn't be able to exceed 32 buffers very easily.

I suspect that if we taught streaming_read.c to ask the
BufferAccessStrategy (if one is passed in) what its recommended pin
limit is (strategy->nbuffers?), we could just clamp
max_pinned_buffers, and it would be hard to find a workload where that
makes a difference, and we could think about more complicated logic
later.
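
A minimal sketch of that clamp, assuming some way to get the ring size
out of the (otherwise opaque) strategy -- the accessor name here is
hypothetical:

    /* in streaming read setup */
    if (strategy != NULL)
    {
        int     strategy_pin_limit = GetAccessStrategyBufferCount(strategy);

        max_pinned_buffers = Min(max_pinned_buffers, strategy_pin_limit);
    }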

In other words, I think/hope the fixes for your complaints about
excessive pinning from v5 WRT all-cached heap scans might have also
already improved this case by happy coincidence?  I haven't tried it out
though, I just read your description of the problem...




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-11 Thread Heikki Linnakangas

On 08/03/2024 19:34, Melanie Plageman wrote:

On Fri, Mar 08, 2024 at 06:07:33PM +0200, Heikki Linnakangas wrote:

On 08/03/2024 02:46, Melanie Plageman wrote:

On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:

On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:

However, by adding a vmbuffer to next_block_state, the callback may be
able to avoid extra VM fetches from one invocation to the next.


That's a good idea, holding separate VM buffer pins for the next-unskippable
block and the block we're processing. I adopted that approach.


Cool. It can't be avoided with streaming read vacuum, but I wonder if
there could ever be adverse effects to doing it on master? Maybe if we
are doing a lot of skipping, the VM page for the heap blocks we are
processing could end up changing each time, when we would have had the
right VM page if we had reused the one from
heap_vac_scan_next_block()?

Frankly, I'm in favor of just doing it now because it makes
lazy_scan_heap() less confusing.


+1


I'm pretty happy with the attached patches now. The first one fixes the
existing bug I mentioned in the other email (based on the ongoing
discussion, that might not be how we want to fix it, though).


ISTM we should still do the fix you mentioned -- seems like it has more
upsides than downsides?


 From b68cb29c547de3c4acd10f31aad47b453d154666 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas 
Date: Fri, 8 Mar 2024 16:00:22 +0200
Subject: [PATCH v8 1/3] Set all_visible_according_to_vm correctly with
  DISABLE_PAGE_SKIPPING

It's important for 'all_visible_according_to_vm' to correctly reflect
whether the VM bit is set or not, even when we are not trusting the VM
to skip pages, because contrary to what the comment said,
lazy_scan_prune() relies on it.

If it's incorrectly set to 'false', when the VM bit is in fact set,
lazy_scan_prune() will try to set the VM bit again and dirty the page
unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all
heap pages were dirtied, even if there were no changes. We would also
fail to clear any VM bits that were set incorrectly.

This was broken in commit 980ae17310, so backpatch to v16.


LGTM.


Committed and backpatched this.


 From 47af1ca65cf55ca876869b43bff47f9d43f0750e Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas 
Date: Fri, 8 Mar 2024 17:32:19 +0200
Subject: [PATCH v8 2/3] Confine vacuum skip logic to lazy_scan_skip()
---
  src/backend/access/heap/vacuumlazy.c | 256 +++
  1 file changed, 141 insertions(+), 115 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c 
b/src/backend/access/heap/vacuumlazy.c
index ac55ebd2ae5..0aa08762015 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -204,6 +204,12 @@ typedef struct LVRelState
int64   live_tuples;/* # live tuples remaining */
int64   recently_dead_tuples;   /* # dead, but not yet 
removable */
int64   missed_dead_tuples; /* # removable, but not removed */


Perhaps we should add a comment to the blkno member of LVRelState
indicating that it is used for error reporting and logging?


Well, it's already under the "/* Error reporting state */" section. I
agree this is a little confusing; the name 'blkno' doesn't convey that
it's supposed to be used just for error reporting. But it's a
pre-existing issue, so I left it alone. It can be changed with a separate
patch if we come up with a good idea.



I see why you removed my treatise-level comment that was here about
unskipped skippable blocks. However, when I was trying to understand
this code, I did wish there was some comment that explained to me why we
needed all of the variables next_unskippable_block,
next_unskippable_allvis, all_visible_according_to_vm, and current_block.

The idea that we would choose not to skip a skippable block because of
kernel readahead makes sense. The part that I had trouble wrapping my
head around was that we want to also keep the visibility status of both
the beginning and ending blocks of the skippable range and then use
those to infer the visibility status of the intervening blocks without
another VM lookup if we decide not to skip them.
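
A sketch of that inference, roughly as it ends up looking in
heap_vac_scan_next_block() (simplified, not the exact committed text):

    if (next_block < vacrel->next_unskippable_block)
    {
        /*
         * We're inside a skippable range that we chose not to skip (it was
         * below SKIP_PAGES_THRESHOLD).  Every block before the next
         * unskippable block is known all-visible, so no VM lookup is needed.
         */
        *all_visible_according_to_vm = true;
    }
    else
    {
        /* The next unskippable block itself: use its saved VM status. */
        *all_visible_according_to_vm = vacrel->next_unskippable_allvis;
    }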


Right, I removed the comment because it looked a little out of place and
it duplicated the other comments sprinkled in the function. I agree this
could still use some more comments though.


Here's yet another attempt at making this more readable. I moved the 
logic to find the next unskippable block to a separate function, and 
added comments to make the states more explicit. What do you think?


--
Heikki Linnakangas
Neon (https://neon.tech)
From c21480e9da61e145573de3b502551dde1b8fa3f6 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas 
Date: Fri, 8 Mar 2024 17:32:19 +0200
Subject: [PATCH v9 1/2] Confine vacuum skip logic to lazy_scan_skip()

Rename lazy_scan_skip() to heap_vac_scan_next_block() and move more
code into the function, so that the caller doesn't need to know about
ranges or skipping anymore.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-11 Thread Melanie Plageman
On Mon, Mar 11, 2024 at 11:29:44AM +0200, Heikki Linnakangas wrote:
>
> > I see why you removed my treatise-level comment that was here about
> > unskipped skippable blocks. However, when I was trying to understand
> > this code, I did wish there was some comment that explained to me why we
> > needed all of the variables next_unskippable_block,
> > next_unskippable_allvis, all_visible_according_to_vm, and current_block.
> > 
> > The idea that we would choose not to skip a skippable block because of
> > kernel readahead makes sense. The part that I had trouble wrapping my
> > head around was that we want to also keep the visibility status of both
> > the beginning and ending blocks of the skippable range and then use
> > those to infer the visibility status of the intervening blocks without
> > another VM lookup if we decide not to skip them.
> 
> Right, I removed the comment because it looked a little out of place and it
> duplicated the other comments sprinkled in the function. I agree this could
> still use some more comments though.
> 
> Here's yet another attempt at making this more readable. I moved the logic
> to find the next unskippable block to a separate function, and added
> comments to make the states more explicit. What do you think?

Oh, I like the new structure. Very cool! Just a few remarks:

> From c21480e9da61e145573de3b502551dde1b8fa3f6 Mon Sep 17 00:00:00 2001
> From: Heikki Linnakangas 
> Date: Fri, 8 Mar 2024 17:32:19 +0200
> Subject: [PATCH v9 1/2] Confine vacuum skip logic to lazy_scan_skip()
> 
> Rename lazy_scan_skip() to heap_vac_scan_next_block() and move more
> code into the function, so that the caller doesn't need to know about
> ranges or skipping anymore. heap_vac_scan_next_block() returns the
> next block to process, and the logic for determining that block is all
> within the function. This makes the skipping logic easier to
> understand, as it's all in the same function, and makes the calling
> code easier to understand as it's less cluttered. The state variables
> needed to manage the skipping logic are moved to LVRelState.
> 
> heap_vac_scan_next_block() now manages its own VM buffer separately
> from the caller's vmbuffer variable. The caller's vmbuffer holds the
> VM page for the current block it's processing, while
> heap_vac_scan_next_block() keeps a pin on the VM page for the next
> unskippable block. Most of the time they are the same, so we hold two
> pins on the same buffer, but it's more convenient to manage them
> separately.
> 
> For readability inside heap_vac_scan_next_block(), move the logic of
> finding the next unskippable block to separate function, and add some
> comments.
> 
> This refactoring will also help future patches to switch to using a
> streaming read interface, and eventually AIO
> (https://postgr.es/m/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com)
> 
> Author: Melanie Plageman, with some changes by me

I'd argue you earned co-authorship by now :)

> Discussion: 
> https://postgr.es/m/CAAKRu_Yf3gvXGcCnqqfoq0Q8LX8UM-e-qbm_B1LeZh60f8WhWA%40mail.gmail.com
> ---
>  src/backend/access/heap/vacuumlazy.c | 233 +--
>  1 file changed, 146 insertions(+), 87 deletions(-)
> 
> diff --git a/src/backend/access/heap/vacuumlazy.c 
> b/src/backend/access/heap/vacuumlazy.c
> index ac55ebd2ae..1757eb49b7 100644
> --- a/src/backend/access/heap/vacuumlazy.c
> +++ b/src/backend/access/heap/vacuumlazy.c
> +

>  /*
> - *   lazy_scan_skip() -- set up range of skippable blocks using visibility 
> map.
> + *   heap_vac_scan_next_block() -- get next block for vacuum to process
>   *
> - * lazy_scan_heap() calls here every time it needs to set up a new range of
> - * blocks to skip via the visibility map.  Caller passes the next block in
> - * line.  We return a next_unskippable_block for this range.  When there are
> - * no skippable blocks we just return caller's next_block.  The all-visible
> - * status of the returned block is set in *next_unskippable_allvis for 
> caller,
> - * too.  Block usually won't be all-visible (since it's unskippable), but it
> - * can be during aggressive VACUUMs (as well as in certain edge cases).
> + * lazy_scan_heap() calls here every time it needs to get the next block to
> + * prune and vacuum.  The function uses the visibility map, vacuum options,
> + * and various thresholds to skip blocks which do not need to be processed 
> and

I wonder if "need" is too strong a word since this function
(heap_vac_scan_next_block()) specifically can set blkno to a block which
doesn't *need* to be processed but which it chooses to process because
of SKIP_PAGES_THRESHOLD.

> + * sets blkno to the next block that actually needs to be processed.
>   *
> - * Sets *skipping_current_range to indicate if caller should skip this range.
> - * Costs and benefits drive our decision.  Very small ranges won't be 
> skipped.
> + * The block number and visibility status of the next block to process are 
> set

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-11 Thread Heikki Linnakangas

On 11/03/2024 18:15, Melanie Plageman wrote:

On Mon, Mar 11, 2024 at 11:29:44AM +0200, Heikki Linnakangas wrote:

diff --git a/src/backend/access/heap/vacuumlazy.c 
b/src/backend/access/heap/vacuumlazy.c
index ac55ebd2ae..1757eb49b7 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
+



  /*
- * lazy_scan_skip() -- set up range of skippable blocks using visibility 
map.
+ * heap_vac_scan_next_block() -- get next block for vacuum to process
   *
- * lazy_scan_heap() calls here every time it needs to set up a new range of
- * blocks to skip via the visibility map.  Caller passes the next block in
- * line.  We return a next_unskippable_block for this range.  When there are
- * no skippable blocks we just return caller's next_block.  The all-visible
- * status of the returned block is set in *next_unskippable_allvis for caller,
- * too.  Block usually won't be all-visible (since it's unskippable), but it
- * can be during aggressive VACUUMs (as well as in certain edge cases).
+ * lazy_scan_heap() calls here every time it needs to get the next block to
+ * prune and vacuum.  The function uses the visibility map, vacuum options,
+ * and various thresholds to skip blocks which do not need to be processed and
+ * sets blkno to the next block that actually needs to be processed.


I wonder if "need" is too strong a word since this function
(heap_vac_scan_next_block()) specifically can set blkno to a block which
doesn't *need* to be processed but which it chooses to process because
of SKIP_PAGES_THRESHOLD.


Ok yeah, there's a lot of "needs" here :-). Fixed.


   *
- * Sets *skipping_current_range to indicate if caller should skip this range.
- * Costs and benefits drive our decision.  Very small ranges won't be skipped.
+ * The block number and visibility status of the next block to process are set
+ * in *blkno and *all_visible_according_to_vm.  The return value is false if
+ * there are no further blocks to process.
+ *
+ * vacrel is an in/out parameter here; vacuum options and information about
+ * the relation are read, and vacrel->skippedallvis is set to ensure we don't
+ * advance relfrozenxid when we have skipped vacuuming all-visible blocks.  It


Maybe this should say when we have skipped vacuuming all-visible blocks
which are not all-frozen or just blocks which are not all-frozen.


Ok, rephrased.


+ * also holds information about the next unskippable block, as bookkeeping for
+ * this function.
   *
   * Note: our opinion of which blocks can be skipped can go stale immediately.
   * It's okay if caller "misses" a page whose all-visible or all-frozen marking


Wonder if it makes sense to move this note to
find_next_unskippable_block().


Moved.


@@ -1098,26 +1081,119 @@ lazy_scan_heap(LVRelState *vacrel)
   * older XIDs/MXIDs.  The vacrel->skippedallvis flag will be set here when the
   * choice to skip such a range is actually made, making everything safe.)
   */
-static BlockNumber
-lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
-  bool *next_unskippable_allvis, bool 
*skipping_current_range)
+static bool
+heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
+bool 
*all_visible_according_to_vm)
  {
-   BlockNumber rel_pages = vacrel->rel_pages,
-   next_unskippable_block = next_block,
-   nskippable_blocks = 0;
-   boolskipsallvis = false;
+   BlockNumber next_block;
  
-	*next_unskippable_allvis = true;

-   while (next_unskippable_block < rel_pages)
+   /* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */
+   next_block = vacrel->current_block + 1;
+
+   /* Have we reached the end of the relation? */


No strong opinion on this, but I wonder if being at the end of the
relation counts as a fourth state?


Yeah, perhaps. But I think it makes sense to treat it as a special case.


+   if (next_block >= vacrel->rel_pages)
+   {
+   if (BufferIsValid(vacrel->next_unskippable_vmbuffer))
+   {
+   ReleaseBuffer(vacrel->next_unskippable_vmbuffer);
+   vacrel->next_unskippable_vmbuffer = InvalidBuffer;
+   }
+   *blkno = vacrel->rel_pages;
+   return false;
+   }
+
+   /*
+* We must be in one of the three following states:
+*/
+   if (vacrel->next_unskippable_block == InvalidBlockNumber ||
+   next_block > vacrel->next_unskippable_block)
+   {
+   /*
+* 1. We have just processed an unskippable block (or we're at 
the
+* beginning of the scan).  Find the next unskippable block 
using the
+* visibility map.
+*/


I would reorder the options in the comment or in the if statement since
they seem to be in the reverse order.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-11 Thread Melanie Plageman
On Mon, Mar 11, 2024 at 2:47 PM Heikki Linnakangas  wrote:
>
>
> > Otherwise LGTM
>
> Ok, pushed! Thank you, this is much more understandable now!

Cool, thanks!




Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-11 Thread Melanie Plageman
On Sun, Mar 10, 2024 at 11:01 PM Thomas Munro  wrote:
>
> On Mon, Mar 11, 2024 at 5:31 AM Melanie Plageman
>  wrote:
> > I have investigated the interaction between
> > maintenance_io_concurrency, streaming reads, and the vacuum buffer
> > access strategy (BAS_VACUUM).
> >
> > The streaming read API limits max_pinned_buffers to a pinned buffer
> > multiplier (currently 4) * maintenance_io_concurrency buffers with the
> > goal of constructing reads of at least MAX_BUFFERS_PER_TRANSFER size.
> >
> > Since the BAS_VACUUM ring buffer is size 256 kB or 32 buffers with
> > default block size, that means that for a fully uncached vacuum in
> > which all blocks must be vacuumed and will be dirtied, you'd have to
> > set maintenance_io_concurrency at 8 or lower to see the same number of
> > reuses (and shared buffer consumption) as master.
> >
> > Given that we allow users to specify BUFFER_USAGE_LIMIT to vacuum, it
> > seems like we should force max_pinned_buffers to a value that
> > guarantees the expected shared buffer usage by vacuum. But that means
> > that maintenance_io_concurrency does not have a predictable impact on
> > streaming read vacuum.
> >
> > What is the right thing to do here?
> >
> > At the least, the default size of the BAS_VACUUM ring buffer should be
> > BLCKSZ * pinned_buffer_multiplier * default maintenance_io_concurrency
> > (probably rounded up to the next power of two) bytes.
>
> Hmm, does the v6 look-ahead distance control algorithm mitigate that
> problem?  Using the ABC classifications from the streaming read
> thread, I think for A it should now pin only 1, for B 16 and for C, it
> depends on the size of the random 'chunks': if you have a lot of size
> 1 random reads then it shouldn't go above 10 because of (default)
> maintenance_io_concurrency.  The only way to get up to very high
> numbers would be to have a lot of random chunks triggering behaviour
> C, but each made up of long runs of misses.  For example one can
> contrive a BHS query that happens to read pages 0-15 then 20-35 then
> 40-55 etc etc so that we want to get lots of wide I/Os running
> concurrently.  Unless vacuum manages to do something like that, it
> shouldn't be able to exceed 32 buffers very easily.
>
> I suspect that if we taught streaming_read.c to ask the
> BufferAccessStrategy (if one is passed in) what its recommended pin
> limit is (strategy->nbuffers?), we could just clamp
> max_pinned_buffers, and it would be hard to find a workload where that
> makes a difference, and we could think about more complicated logic
> later.
>
> In other words, I think/hope the fixes for your complaints about excessive
> pinning from v5 WRT all-cached heap scans might have also already improved
> this case by happy coincidence?  I haven't tried it out though, I just
> read your description of the problem...

I've rebased the attached v10 over top of the changes to
lazy_scan_heap() Heikki just committed and over the v6 streaming read
patch set. I started testing them and see that you are right, we no
longer pin too many buffers. However, the uncached example below is
now slower with streaming read than on master -- it looks to be
because it is doing twice as many WAL writes and syncs. I'm still
investigating why that is.

psql \
-c "create table small (a int) with (autovacuum_enabled=false,
fillfactor=25);" \
-c "insert into small select generate_series(1,20) % 3;" \
-c "update small set a = 6 where a = 1;"

pg_ctl stop
# drop caches
pg_ctl start

psql -c "\timing on" -c "vacuum (verbose) small"

- Melanie
From 2b181d178f0cd55de45659540ec35536918a4c9d Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Mon, 11 Mar 2024 15:36:39 -0400
Subject: [PATCH v10 1/3] Streaming Read API

---
 src/backend/storage/Makefile |   2 +-
 src/backend/storage/aio/Makefile |  14 +
 src/backend/storage/aio/meson.build  |   5 +
 src/backend/storage/aio/streaming_read.c | 648 +++
 src/backend/storage/buffer/bufmgr.c  | 641 +++---
 src/backend/storage/buffer/localbuf.c|  14 +-
 src/backend/storage/meson.build  |   1 +
 src/include/storage/bufmgr.h |  45 ++
 src/include/storage/streaming_read.h |  52 ++
 src/tools/pgindent/typedefs.list |   3 +
 10 files changed, 1215 insertions(+), 210 deletions(-)
 create mode 100644 src/backend/storage/aio/Makefile
 create mode 100644 src/backend/storage/aio/meson.build
 create mode 100644 src/backend/storage/aio/streaming_read.c
 create mode 100644 src/include/storage/streaming_read.h

diff --git a/src/backend/storage/Makefile b/src/backend/storage/Makefile
index 8376cdfca20..eec03f6f2b4 100644
--- a/src/backend/storage/Makefile
+++ b/src/backend/storage/Makefile
@@ -8,6 +8,6 @@ subdir = src/backend/storage
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-SUBDIRS = buffer file freespace ipc large_object lmgr page smgr sync
+SUBDIRS = aio buffer file freespace ipc large_object lmgr page smgr sync

Re: Confine vacuum skip logic to lazy_scan_skip

2024-03-16 Thread Thomas Munro
On Tue, Mar 12, 2024 at 10:03 AM Melanie Plageman
 wrote:
> I've rebased the attached v10 over top of the changes to
> lazy_scan_heap() Heikki just committed and over the v6 streaming read
> patch set. I started testing them and see that you are right, we no
> longer pin too many buffers. However, the uncached example below is
> now slower with streaming read than on master -- it looks to be
> because it is doing twice as many WAL writes and syncs. I'm still
> investigating why that is.

That makes sense to me.  We have 256kB of buffers in our ring, but now
we're trying to read ahead 128kB at a time, so we can only flush the WAL
accumulated while dirtying half the blocks at a time, and we end up
flushing twice as often.

If I change the ring size to 384kB, allowing for that read-ahead
window, I see approximately the same WAL flushes.  Surely we'd never
be able to get the behaviour to match *and* keep the same ring size?
We simply need those 16 extra buffers to have a chance of accumulating
32 dirty buffers, and the associated WAL.  Do you see the same result,
or do you think something more than that is wrong here?
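
In numbers (a rough sketch, assuming BLCKSZ = 8192):

    int     ring_buffers = (256 * 1024) / 8192;        /* 32 */
    int     readahead_buffers = (128 * 1024) / 8192;   /* 16 */

    /*
     * With 16 of the 32 ring slots held for look-ahead, only ~16 dirty
     * buffers can accumulate before the oldest must be written out, so
     * each XLogFlush() covers half as much WAL and runs twice as often.
     * A 384 kB ring (32 + 16 buffers) restores the 32-buffer window.
     */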

Here are some system call traces using your test that helped me see
the behaviour:

1. Unpatched, ie no streaming read, we flush 90kB of WAL generated by
32 pages before we write them out one at a time just before we read in
their replacements.  One flush covers the LSNs of all the pages that
will be written, even though it's only called for the first page to be
written.  That's because XLogFlush(lsn), if it decides to do anything,
flushes as far as it can... IOW when we hit the *oldest* dirty block,
that's when we write out the WAL up to where we dirtied the *newest*
block, which covers the 32 pwrite() calls here:

pwrite(30,...,90112,0xf9) = 90112 (0x16000)
fdatasync(30) = 0 (0x0)
pwrite(27,...,8192,0x0) = 8192 (0x2000)
pread(27,...,8192,0x4) = 8192 (0x2000)
pwrite(27,...,8192,0x2000) = 8192 (0x2000)
pread(27,...,8192,0x42000) = 8192 (0x2000)
pwrite(27,...,8192,0x4000) = 8192 (0x2000)
pread(27,...,8192,0x44000) = 8192 (0x2000)
pwrite(27,...,8192,0x6000) = 8192 (0x2000)
pread(27,...,8192,0x46000) = 8192 (0x2000)
pwrite(27,...,8192,0x8000) = 8192 (0x2000)
pread(27,...,8192,0x48000) = 8192 (0x2000)
pwrite(27,...,8192,0xa000) = 8192 (0x2000)
pread(27,...,8192,0x4a000) = 8192 (0x2000)
pwrite(27,...,8192,0xc000) = 8192 (0x2000)
pread(27,...,8192,0x4c000) = 8192 (0x2000)
pwrite(27,...,8192,0xe000) = 8192 (0x2000)
pread(27,...,8192,0x4e000) = 8192 (0x2000)
pwrite(27,...,8192,0x1) = 8192 (0x2000)
pread(27,...,8192,0x5) = 8192 (0x2000)
pwrite(27,...,8192,0x12000) = 8192 (0x2000)
pread(27,...,8192,0x52000) = 8192 (0x2000)
pwrite(27,...,8192,0x14000) = 8192 (0x2000)
pread(27,...,8192,0x54000) = 8192 (0x2000)
pwrite(27,...,8192,0x16000) = 8192 (0x2000)
pread(27,...,8192,0x56000) = 8192 (0x2000)
pwrite(27,...,8192,0x18000) = 8192 (0x2000)
pread(27,...,8192,0x58000) = 8192 (0x2000)
pwrite(27,...,8192,0x1a000) = 8192 (0x2000)
pread(27,...,8192,0x5a000) = 8192 (0x2000)
pwrite(27,...,8192,0x1c000) = 8192 (0x2000)
pread(27,...,8192,0x5c000) = 8192 (0x2000)
pwrite(27,...,8192,0x1e000) = 8192 (0x2000)
pread(27,...,8192,0x5e000) = 8192 (0x2000)
pwrite(27,...,8192,0x2) = 8192 (0x2000)
pread(27,...,8192,0x6) = 8192 (0x2000)
pwrite(27,...,8192,0x22000) = 8192 (0x2000)
pread(27,...,8192,0x62000) = 8192 (0x2000)
pwrite(27,...,8192,0x24000) = 8192 (0x2000)
pread(27,...,8192,0x64000) = 8192 (0x2000)
pwrite(27,...,8192,0x26000) = 8192 (0x2000)
pread(27,...,8192,0x66000) = 8192 (0x2000)
pwrite(27,...,8192,0x28000) = 8192 (0x2000)
pread(27,...,8192,0x68000) = 8192 (0x2000)
pwrite(27,...,8192,0x2a000) = 8192 (0x2000)
pread(27,...,8192,0x6a000) = 8192 (0x2000)
pwrite(27,...,8192,0x2c000) = 8192 (0x2000)
pread(27,...,8192,0x6c000) = 8192 (0x2000)
pwrite(27,...,8192,0x2e000) = 8192 (0x2000)
pread(27,...,8192,0x6e000) = 8192 (0x2000)
pwrite(27,...,8192,0x3) = 8192 (0x2000)
pread(27,...,8192,0x7) = 8192 (0x2000)
pwrite(27,...,8192,0x32000) = 8192 (0x2000)
pread(27,...,8192,0x72000) = 8192 (0x2000)
pwrite(27,...,8192,0x34000) = 8192 (0x2000)
pread(27,...,8192,0x74000) = 8192 (0x2000)
pwrite(27,...,8192,0x36000) = 8192 (0x2000)
pread(27,...,8192,0x76000) = 8192 (0x2000)
pwrite(27,...,8192,0x38000) = 8192 (0x2000)
pread(27,...,8192,0x78000) = 8192 (0x2000)
pwrite(27,...,8192,0x3a000) = 8192 (0x2000)
pread(27,...,8192,0x7a000) = 8192 (0x2000)
pwrite(27,...,8192,0x3c000) = 8192 (0x2000)
pread(27,...,8192,0x7c000) = 8192 (0x2000)
pwrite(27,...,8192,0x3e000) = 8192 (0x2000)
pread(27,...,8192,0x7e000) = 8192 (0x2000)

(Digression: this alternative tail-write-head-read pattern defeats the
read-ahead and write-behind on a bunch of OSes, but not Linux because
it only seems to worry about the reads, while other Unixes have
write-behind detection too, and I believe at least some are confused
by this pattern of tiny writes following along some distance behind
tiny reads; Andrew Gierth figured that out afte

Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-02 Thread Melanie Plageman
On Sun, Dec 31, 2023 at 1:28 PM Melanie Plageman
 wrote:
>
> There are a few comments that still need to be updated. I also noticed I
> needed to reorder and combine a couple of the commits. I wanted to
> register this for the january commitfest, so I didn't quite have time
> for the finishing touches.

I've updated this patch set to remove a commit that didn't make sense
on its own and do various other cleanup.

- Melanie
From 335faad5948b2bec3b83c2db809bb9161d373dcb Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Sat, 30 Dec 2023 16:59:27 -0500
Subject: [PATCH v2 4/6] Confine vacuum skip logic to lazy_scan_skip

In preparation for vacuum to use the streaming read interface (and eventually
AIO), refactor vacuum's logic for skipping blocks such that it is entirely
confined to lazy_scan_skip(). This turns lazy_scan_skip() and the VacSkipState
it uses into an iterator which yields blocks to lazy_scan_heap(). Such a
structure is conducive to an async interface.

By always calling lazy_scan_skip() -- instead of only when we have reached the
next unskippable block, we no longer need the skipping_current_range variable.
lazy_scan_heap() no longer needs to manage the skipped range -- checking if we
reached the end in order to then call lazy_scan_skip(). And lazy_scan_skip()
can derive the visibility status of a block from whether or not we are in a
skippable range -- that is, whether or not the next_block is equal to the next
unskippable block.
---
 src/backend/access/heap/vacuumlazy.c | 233 ++-
 1 file changed, 120 insertions(+), 113 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index e3827a5e4d3..42da4ac64f8 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -250,14 +250,13 @@ typedef struct VacSkipState
 	Buffer		vmbuffer;
 	/* Next unskippable block's visibility status */
 	bool		next_unskippable_allvis;
-	/* Whether or not skippable blocks should be skipped */
-	bool		skipping_current_range;
 } VacSkipState;
 
 /* non-export function prototypes */
 static void lazy_scan_heap(LVRelState *vacrel);
-static void lazy_scan_skip(LVRelState *vacrel, VacSkipState *vacskip,
-		   BlockNumber next_block);
+static BlockNumber lazy_scan_skip(LVRelState *vacrel, VacSkipState *vacskip,
+  BlockNumber blkno,
+  bool *all_visible_according_to_vm);
 static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
    BlockNumber blkno, Page page,
    bool sharelock, Buffer vmbuffer);
@@ -838,9 +837,15 @@ static void
 lazy_scan_heap(LVRelState *vacrel)
 {
 	BlockNumber rel_pages = vacrel->rel_pages,
-blkno,
 next_fsm_block_to_vacuum = 0;
-	VacSkipState vacskip = {.vmbuffer = InvalidBuffer};
+	bool		all_visible_according_to_vm;
+
+	/* relies on InvalidBlockNumber overflowing to 0 */
+	BlockNumber blkno = InvalidBlockNumber;
+	VacSkipState vacskip = {
+		.next_unskippable_block = InvalidBlockNumber,
+		.vmbuffer = InvalidBuffer
+	};
 	VacDeadItems *dead_items = vacrel->dead_items;
 	const int	initprog_index[] = {
 		PROGRESS_VACUUM_PHASE,
@@ -855,37 +860,17 @@ lazy_scan_heap(LVRelState *vacrel)
 	initprog_val[2] = dead_items->max_items;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
-	/* Set up an initial range of skippable blocks using the visibility map */
-	lazy_scan_skip(vacrel, &vacskip, 0);
-	for (blkno = 0; blkno < rel_pages; blkno++)
+	while (true)
 	{
 		Buffer		buf;
 		Page		page;
-		bool		all_visible_according_to_vm;
 		LVPagePruneState prunestate;
 
-		if (blkno == vacskip.next_unskippable_block)
-		{
-			/*
-			 * Can't skip this page safely.  Must scan the page.  But
-			 * determine the next skippable range after the page first.
-			 */
-			all_visible_according_to_vm = vacskip.next_unskippable_allvis;
-			lazy_scan_skip(vacrel, &vacskip, blkno + 1);
-
-			Assert(vacskip.next_unskippable_block >= blkno + 1);
-		}
-		else
-		{
-			/* Last page always scanned (may need to set nonempty_pages) */
-			Assert(blkno < rel_pages - 1);
-
-			if (vacskip.skipping_current_range)
-continue;
+		blkno = lazy_scan_skip(vacrel, &vacskip, blkno + 1,
+			   &all_visible_according_to_vm);
 
-			/* Current range is too small to skip -- just scan the page */
-			all_visible_according_to_vm = true;
-		}
+		if (blkno == InvalidBlockNumber)
+			break;
 
 		vacrel->scanned_pages++;
 
@@ -1287,20 +1272,13 @@ lazy_scan_heap(LVRelState *vacrel)
 }
 
 /*
- *	lazy_scan_skip() -- set up range of skippable blocks using visibility map.
- *
- * lazy_scan_heap() calls here every time it needs to set up a new range of
- * blocks to skip via the visibility map.  Caller passes next_block, the next
- * block in line. The parameters of the skipped range are recorded in vacskip.
- * vacrel is an in/out parameter here; vacuum options and information about the
- * relation are read and vacrel->skippedallvis is set to ensure we don't
- * advance relfrozenxid when we have skipped vacuuming all-visible blocks.

Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-04 Thread Andres Freund
Hi,

On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages
> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
>  nskippable_blocks

I think these may lead to worse code - the compiler has to reload
vacrel->rel_pages/next_unskippable_block for every loop iteration, because it
can't guarantee that they're not changed within one of the external functions
called in the loop body.

> Subject: [PATCH v2 3/6] Add lazy_scan_skip unskippable state
> 
> Future commits will remove all skipping logic from lazy_scan_heap() and
> confine it to lazy_scan_skip(). To make those commits more clear, first
> introduce the struct, VacSkipState, which will maintain the variables
> needed to skip ranges less than SKIP_PAGES_THRESHOLD.

Why not add this to LVRelState, possibly as a struct embedded within it?


> From 335faad5948b2bec3b83c2db809bb9161d373dcb Mon Sep 17 00:00:00 2001
> From: Melanie Plageman 
> Date: Sat, 30 Dec 2023 16:59:27 -0500
> Subject: [PATCH v2 4/6] Confine vacuum skip logic to lazy_scan_skip
> 
> In preparation for vacuum to use the streaming read interface (and eventually
> AIO), refactor vacuum's logic for skipping blocks such that it is entirely
> confined to lazy_scan_skip(). This turns lazy_scan_skip() and the VacSkipState
> it uses into an iterator which yields blocks to lazy_scan_heap(). Such a
> structure is conducive to an async interface.

And it's cleaner - I find the current code extremely hard to reason about.


> By always calling lazy_scan_skip() -- instead of only when we have reached the
> next unskippable block, we no longer need the skipping_current_range variable.
> lazy_scan_heap() no longer needs to manage the skipped range -- checking if we
> reached the end in order to then call lazy_scan_skip(). And lazy_scan_skip()
> can derive the visibility status of a block from whether or not we are in a
> skippable range -- that is, whether or not the next_block is equal to the next
> unskippable block.

I wonder if it should be renamed as part of this - the name is somewhat
confusing now (and perhaps before)? lazy_scan_get_next_block() or such?


> + while (true)
>   {
>   Buffer  buf;
>   Pagepage;
> - boolall_visible_according_to_vm;
>   LVPagePruneState prunestate;
>  
> - if (blkno == vacskip.next_unskippable_block)
> - {
> - /*
> -  * Can't skip this page safely.  Must scan the page.  
> But
> -  * determine the next skippable range after the page 
> first.
> -  */
> - all_visible_according_to_vm = 
> vacskip.next_unskippable_allvis;
> - lazy_scan_skip(vacrel, &vacskip, blkno + 1);
> -
> - Assert(vacskip.next_unskippable_block >= blkno + 1);
> - }
> - else
> - {
> - /* Last page always scanned (may need to set 
> nonempty_pages) */
> - Assert(blkno < rel_pages - 1);
> -
> - if (vacskip.skipping_current_range)
> - continue;
> + blkno = lazy_scan_skip(vacrel, &vacskip, blkno + 1,
> +
> &all_visible_according_to_vm);
>  
> - /* Current range is too small to skip -- just scan the 
> page */
> - all_visible_according_to_vm = true;
> - }
> + if (blkno == InvalidBlockNumber)
> + break;
>  
>   vacrel->scanned_pages++;
>

I don't like that we still do determination about the next block outside of
lazy_scan_skip() and have duplicated exit conditions between lazy_scan_skip()
and lazy_scan_heap().

I'd probably change the interface to something like

while (lazy_scan_get_next_block(vacrel, &blkno))
{
...
}
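
i.e. something like this sketch, with the callback owning all the exit
conditions (names illustrative):

    BlockNumber blkno;
    bool        all_visible_according_to_vm;

    while (lazy_scan_get_next_block(vacrel, &blkno,
                                    &all_visible_according_to_vm))
    {
        vacrel->scanned_pages++;
        /* ... prune and vacuum blkno ... */
    }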


> From b6603e35147c4bbe3337280222e6243524b0110e Mon Sep 17 00:00:00 2001
> From: Melanie Plageman 
> Date: Sun, 31 Dec 2023 09:47:18 -0500
> Subject: [PATCH v2 5/6] VacSkipState saves reference to LVRelState
> 
> The streaming read interface can only give pgsr_next callbacks access to
> two pieces of private data. As such, move a reference to the LVRelState
> into the VacSkipState.
> 
> This is a separate commit (as opposed to as part of the commit
> introducing VacSkipState) because it is required for using the streaming
> read interface but not a natural change on its own. VacSkipState is per
> block and the LVRelState is referenced for the whole relation vacuum.

I'd do it the other way round, i.e. either embed VacSkipState ino LVRelState
or point to it from VacSkipState.

LVRelState is already tied to the iteration state, so I don't think there's a
reason not to do so.

Greetings,

Andres Freund




Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-04 Thread Jim Nasby


  
  
On 1/4/24 2:23 PM, Andres Freund wrote:
> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
>> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages
>> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
>>  nskippable_blocks
> I think these may lead to worse code - the compiler has to reload
> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it
> can't guarantee that they're not changed within one of the external functions
> called in the loop body.

Admittedly I'm not up to speed on recent vacuum changes, but I have to
wonder if the concept of skipping should go away in the context of
vector IO? Instead of thinking about "we can skip this range of blocks",
why not maintain a list of "here's the next X number of blocks that we
need to vacuum"?

-- 
Jim Nasby, Data Architect, Austin TX





Re: Confine vacuum skip logic to lazy_scan_skip

2024-01-05 Thread Nazir Bilal Yavuz
Hi,

On Fri, 5 Jan 2024 at 02:25, Jim Nasby  wrote:
>
> On 1/4/24 2:23 PM, Andres Freund wrote:
>
> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:
>
> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages
> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var
>  nskippable_blocks
>
> I think these may lead to worse code - the compiler has to reload
> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it
> can't guarantee that they're not changed within one of the external functions
> called in the loop body.
>
> Admittedly I'm not up to speed on recent vacuum changes, but I have to wonder 
> if the concept of skipping should go away in the context of vector IO? 
> Instead of thinking about "we can skip this range of blocks", why not 
> maintain a list of "here's the next X number of blocks that we need to 
> vacuum"?

Sorry if I misunderstood. AFAIU, with the help of vectored IO, "the
next X number of blocks that need to be vacuumed" will be prefetched by
calculating the unskippable blocks (using the lazy_scan_skip()
function), and X will be determined by Postgres itself. Do you have
something different in mind?

-- 
Regards,
Nazir Bilal Yavuz
Microsoft




Re: Confine vacuum skip logic to lazy_scan_skip

2024-06-28 Thread Melanie Plageman
On Sun, Mar 17, 2024 at 2:53 AM Thomas Munro  wrote:
>
> On Tue, Mar 12, 2024 at 10:03 AM Melanie Plageman
>  wrote:
> > I've rebased the attached v10 over top of the changes to
> > lazy_scan_heap() Heikki just committed and over the v6 streaming read
> > patch set. I started testing them and see that you are right, we no
> > longer pin too many buffers. However, the uncached example below is
> > now slower with streaming read than on master -- it looks to be
> > because it is doing twice as many WAL writes and syncs. I'm still
> > investigating why that is.

--snip--

> 4. For learning/exploration only, I rebased my experimental vectored
> FlushBuffers() patch, which teaches the checkpointer to write relation
> data out using smgrwritev().  The checkpointer explicitly sorts
> blocks, but I think ring buffers should naturally often contain
> consecutive blocks in ring order.  Highly experimental POC code pushed
> to a public branch[2], but I am not proposing anything here, just
> trying to understand things.  The nicest looking system call trace was
> with BUFFER_USAGE_LIMIT set to 512kB, so it could do its writes, reads
> and WAL writes 128kB at a time:
>
> pwrite(32,...,131072,0xfc6000) = 131072 (0x2)
> fdatasync(32) = 0 (0x0)
> pwrite(27,...,131072,0x6c) = 131072 (0x2)
> pread(27,...,131072,0x73e000) = 131072 (0x2)
> pwrite(27,...,131072,0x6e) = 131072 (0x2)
> pread(27,...,131072,0x75e000) = 131072 (0x2)
> pwritev(27,[...],3,0x77e000) = 131072 (0x2)
> preadv(27,[...],3,0x77e000) = 131072 (0x2)
>
> That was a fun experiment, but... I recognise that efficient cleaning
> of ring buffers is a Hard Problem requiring more concurrency: it's
> just too late to be flushing that WAL.  But we also don't want to
> start writing back data immediately after dirtying pages (cf. OS
> write-behind for big sequential writes in traditional Unixes), because
> we're not allowed to write data out without writing the WAL first and
> we currently need to build up bigger WAL writes to do so efficiently
> (cf. some other systems that can write out fragments of WAL
> concurrently so the latency-vs-throughput trade-off doesn't have to be
> so extreme).  So we want to defer writing it, but not too long.  We
> need something cleaning our buffers (or at least flushing the
> associated WAL, but preferably also writing the data) not too late and
> not too early, and more in sync with our scan than the WAL writer is.
> What that machinery should look like I don't know (but I believe
> Andres has ideas).
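
For readers following along, the invariant referenced above ("not
allowed to write data out without writing the WAL first") boils down
to roughly this per-page rule; a sketch only, not the actual bufmgr
code:

/*
 * Sketch of the WAL-before-data rule.  WAL must be durable up to the
 * page's LSN before the page itself may be written out.  Applying it
 * page by page like this is what produces the many small WAL writes
 * and syncs discussed above.
 */
static void
write_dirty_page(SMgrRelation reln, ForkNumber forknum,
				 BlockNumber blkno, Page page)
{
	XLogFlush(PageGetLSN(page));	/* WAL first, possibly a tiny write */
	smgrwrite(reln, forknum, blkno, page, false);
}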

I've attached a WIP v11 streaming vacuum patch set here that is
rebased over master (by Thomas), so that I could add a CF entry for
it. It still has the problem with the extra WAL write and fsync calls
investigated by Thomas above. Thomas has some work in progress doing
streaming write-behind to alleviate the issues with the buffer access
strategy and streaming reads. When he gets a version of that ready to
share, he will start a new "Streaming Vacuum" thread.

- Melanie
From 050b6c3fa73c9153aeef58fcd306533c1008802e Mon Sep 17 00:00:00 2001
From: Thomas Munro 
Date: Fri, 26 Apr 2024 08:32:44 +1200
Subject: [PATCH v11 2/3] Refactor tidstore.c memory management.

Previously, TidStoreIterateNext() would expand the set of offsets for
each block into a buffer that it overwrote each time.  In order to be
able to collect the offsets for multiple blocks before working with
them, change the contract.  Now, the offsets are obtained by a separate
call to TidStoreGetBlockOffsets(), which can be called at a later time,
and TidStoreIterResult objects are safe to copy and store in a
queue.

This will be used by a later patch, to avoid the need for expensive
extra copies of the offset array and associated memory management.
---
 src/backend/access/common/tidstore.c  | 68 +--
 src/backend/access/heap/vacuumlazy.c  |  9 ++-
 src/include/access/tidstore.h | 12 ++--
 .../modules/test_tidstore/test_tidstore.c |  9 ++-
 4 files changed, 53 insertions(+), 45 deletions(-)

diff --git a/src/backend/access/common/tidstore.c b/src/backend/access/common/tidstore.c
index fb3949d69f6..c3c1987204b 100644
--- a/src/backend/access/common/tidstore.c
+++ b/src/backend/access/common/tidstore.c
@@ -147,9 +147,6 @@ struct TidStoreIter
 	TidStoreIterResult output;
 };
 
-static void tidstore_iter_extract_tids(TidStoreIter *iter, BlockNumber blkno,
-	   BlocktableEntry *page);
-
 /*
  * Create a TidStore. The TidStore will live in the memory context that is
  * CurrentMemoryContext at the time of this call. The TID storage, backed
@@ -486,13 +483,6 @@ TidStoreBeginIterate(TidStore *ts)
 	iter = palloc0(sizeof(TidStoreIter));
 	iter->ts = ts;
 
-	/*
-	 * We start with an array large enough to contain at least the offsets
-	 * from one completely full bitmap element.
-	 */
-	iter->output.max_offset = 2 * BITS_PER_BITMAPWORD;
-	iter->output.offsets = palloc(sizeof(OffsetNumber) * iter

Re: Confine vacuum skip logic to lazy_scan_skip

2024-07-07 Thread Noah Misch
On Fri, Jun 28, 2024 at 05:36:25PM -0400, Melanie Plageman wrote:
> I've attached a WIP v11 streaming vacuum patch set here that is
> rebased over master (by Thomas), so that I could add a CF entry for
> it. It still has the problem with the extra WAL write and fsync calls
> investigated by Thomas above. Thomas has some work in progress doing
> streaming write-behind to alleviate the issues with the buffer access
> strategy and streaming reads. When he gets a version of that ready to
> share, he will start a new "Streaming Vacuum" thread.

To avoid reviewing the wrong patch, I'm writing to verify the status here.
This is Needs Review in the commitfest.  I think one of these two holds:

1. Needs Review is valid.
2. It's actually Waiting on Author.  You're commissioning a review of the
   future-thread patch, not this one.

If it's (1), given the WIP marking, what is the scope of the review you seek?
I'm guessing performance is out of scope; what else is in or out of scope?




Re: Confine vacuum skip logic to lazy_scan_skip

2024-07-08 Thread Melanie Plageman
On Sun, Jul 7, 2024 at 10:49 AM Noah Misch  wrote:
>
> On Fri, Jun 28, 2024 at 05:36:25PM -0400, Melanie Plageman wrote:
> > I've attached a WIP v11 streaming vacuum patch set here that is
> > rebased over master (by Thomas), so that I could add a CF entry for
> > it. It still has the problem with the extra WAL write and fsync calls
> > investigated by Thomas above. Thomas has some work in progress doing
> > streaming write-behind to alleviate the issues with the buffer access
> > strategy and streaming reads. When he gets a version of that ready to
> > share, he will start a new "Streaming Vacuum" thread.
>
> To avoid reviewing the wrong patch, I'm writing to verify the status here.
> This is Needs Review in the commitfest.  I think one of these two holds:
>
> 1. Needs Review is valid.
> 2. It's actually Waiting on Author.  You're commissioning a review of the
>    future-thread patch, not this one.
>
> If it's (1), given the WIP marking, what is the scope of the review you seek?
> I'm guessing performance is out of scope; what else is in or out of scope?

Ah, you're right. I moved it to "Waiting on Author" as we are waiting
on Thomas' version which has a fix for the extra WAL write/sync
behavior.

Sorry for the "Needs Review" noise!

- Melanie




Re: Confine vacuum skip logic to lazy_scan_skip

2024-07-14 Thread Thomas Munro
On Mon, Jul 8, 2024 at 2:49 AM Noah Misch  wrote:
> what is the scope of the review you seek?

The patch "Refactor tidstore.c memory management." could definitely
use some review.  I wasn't sure if that should be proposed in a new
thread of its own, but then the need for it comes from this
streamifying project, so...  The basic problem was that we want to
build up a stream of block to be vacuumed (so that we can perform the
I/O combining etc) + some extra data attached to each buffer, in this
case the TID list, but the implementation of tidstore.c in master
would require us to make an extra intermediate copy of the TIDs,
because it keeps overwriting its internal buffer.  The proposal here
is to make it so that you can get get a tiny copyable object that can
later be used to retrieve the data into a caller-supplied buffer, so
that tidstore.c's iterator machinery doesn't have to have its own
internal buffer at all, and then calling code can safely queue up a
few of these at once.




Re: Confine vacuum skip logic to lazy_scan_skip

2024-07-15 Thread Noah Misch
On Mon, Jul 15, 2024 at 03:26:32PM +1200, Thomas Munro wrote:
> On Mon, Jul 8, 2024 at 2:49 AM Noah Misch  wrote:
> > what is the scope of the review you seek?
> 
> The patch "Refactor tidstore.c memory management." could definitely
> use some review.

That's reasonable.  radixtree already forbids mutations concurrent with
iteration, so there's no new concurrency hazard.  One alternative is
per_buffer_data big enough for MaxOffsetNumber, but that might thrash caches
measurably.  That patch is good to go apart from these trivialities:

> - return &(iter->output);
> + return &iter->output;

This cosmetic change is orthogonal to the patch's mission.

> - for (wordnum = 0; wordnum < page->header.nwords; wordnum++)
> + for (int wordnum = 0; wordnum < page->header.nwords; wordnum++)

Likewise.




Re: Confine vacuum skip logic to lazy_scan_skip

2024-07-23 Thread Thomas Munro
On Tue, Jul 16, 2024 at 1:52 PM Noah Misch  wrote:
> On Mon, Jul 15, 2024 at 03:26:32PM +1200, Thomas Munro wrote:
> That's reasonable.  radixtree already forbids mutations concurrent with
> iteration, so there's no new concurrency hazard.  One alternative is
> per_buffer_data big enough for MaxOffsetNumber, but that might thrash caches
> measurably.  That patch is good to go apart from these trivialities:

Thanks!  I have pushed that patch, without those changes you didn't like.

Here are Melanie's patches again.  They work, and the WAL flush
frequency problem is mostly gone since we increased the BAS_VACUUM
default ring size (commit 98f320eb), but I'm still looking into how
this read-ahead and the write-behind generated by vacuum (using
patches not yet posted) should interact with each other and the ring
system, and bouncing ideas around about that with my colleagues.  More
on that soon, hopefully.  I suspect that there won't be changes to
these patches as a result, but I still want to hold off for a bit.
From ac826d0187252bf446fb5f12489def5208d20289 Mon Sep 17 00:00:00 2001
From: Melanie Plageman 
Date: Mon, 11 Mar 2024 16:19:56 -0400
Subject: [PATCH v12 1/2] Use streaming I/O in VACUUM first pass.

Now vacuum's first pass, which HOT-prunes and records the TIDs of
non-removable dead tuples, uses the streaming read API by converting
heap_vac_scan_next_block() to a read stream callback.

Author: Melanie Plageman 
---
 src/backend/access/heap/vacuumlazy.c | 80 +---
 1 file changed, 49 insertions(+), 31 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 835b53415d0..d92fac7e7e3 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -55,6 +55,7 @@
 #include "storage/bufmgr.h"
 #include "storage/freespace.h"
 #include "storage/lmgr.h"
+#include "storage/read_stream.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/pg_rusage.h"
@@ -229,8 +230,9 @@ typedef struct LVSavedErrInfo
 
 /* non-export function prototypes */
 static void lazy_scan_heap(LVRelState *vacrel);
-static bool heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
-	 bool *all_visible_according_to_vm);
+static BlockNumber heap_vac_scan_next_block(ReadStream *stream,
+			void *callback_private_data,
+			void *per_buffer_data);
 static void find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis);
 static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
    BlockNumber blkno, Page page,
@@ -815,10 +817,11 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 static void
 lazy_scan_heap(LVRelState *vacrel)
 {
+	Buffer		buf;
+	ReadStream *stream;
 	BlockNumber rel_pages = vacrel->rel_pages,
-blkno,
 next_fsm_block_to_vacuum = 0;
-	bool		all_visible_according_to_vm;
+	bool	   *all_visible_according_to_vm;
 
 	TidStore   *dead_items = vacrel->dead_items;
 	VacDeadItemsInfo *dead_items_info = vacrel->dead_items_info;
@@ -836,19 +839,33 @@ lazy_scan_heap(LVRelState *vacrel)
 	initprog_val[2] = dead_items_info->max_bytes;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
+	stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
+		vacrel->bstrategy,
+		vacrel->rel,
+		MAIN_FORKNUM,
+		heap_vac_scan_next_block,
+		vacrel,
+		sizeof(bool));
+
 	/* Initialize for the first heap_vac_scan_next_block() call */
 	vacrel->current_block = InvalidBlockNumber;
 	vacrel->next_unskippable_block = InvalidBlockNumber;
 	vacrel->next_unskippable_allvis = false;
 	vacrel->next_unskippable_vmbuffer = InvalidBuffer;
 
-	while (heap_vac_scan_next_block(vacrel, &blkno, &all_visible_according_to_vm))
+	while (BufferIsValid(buf = read_stream_next_buffer(stream,
+	   (void **) &all_visible_according_to_vm)))
 	{
-		Buffer		buf;
+		BlockNumber blkno;
 		Page		page;
 		bool		has_lpdead_items;
 		bool		got_cleanup_lock = false;
 
+		vacrel->blkno = blkno = BufferGetBlockNumber(buf);
+
+		CheckBufferIsPinnedOnce(buf);
+		page = BufferGetPage(buf);
+
 		vacrel->scanned_pages++;
 
 		/* Report as block scanned, update error traceback information */
@@ -914,10 +931,6 @@ lazy_scan_heap(LVRelState *vacrel)
 		 */
 		visibilitymap_pin(vacrel->rel, blkno, &vmbuffer);
 
-		buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
- vacrel->bstrategy);
-		page = BufferGetPage(buf);
-
 		/*
 		 * We need a buffer cleanup lock to prune HOT chains and defragment
 		 * the page in lazy_scan_prune.  But when it's not possible to acquire
@@ -973,7 +986,7 @@ lazy_scan_heap(LVRelState *vacrel)
 		 */
 		if (got_cleanup_lock)
 			lazy_scan_prune(vacrel, buf, blkno, page,
-			vmbuffer, all_visible_according_to_vm,
+			vmbuffer, *all_visible_according_to_vm,
 			&has_lpdead_items);
 
 		/*
@@ -1027,7 +1040,7 @@ lazy_scan_heap(LVRelState *vacrel)
 		ReleaseBu