On Wed, Jan 24, 2024 at 4:34 PM Melanie Plageman
<melanieplage...@gmail.com> wrote:
>
> On Wed, Jan 24, 2024 at 2:59 PM Robert Haas <robertmh...@gmail.com> wrote:
...
> > If you'd like, I can try rewriting these comments to my satisfaction
> > and you can reverse-review the result. Or you can rewrite them and
> > I'll re-review the result. But I think this needs to be a little less
> > mechanical. It's not just about shuffling all the comments around so
> > that all the text ends up somewhere -- we also need to consider the
> > degree to which the meaning becomes duplicated when it all gets merged
> > together.
>
> I will take a stab at rewriting the comments myself first. Usually, I
> try to avoid changing comments if the code isn't functionally
> different because I know it adds additional review overhead and I try
> to reduce that to an absolute minimum. However, I see what you are
> saying and agree that it would be better to have actually good
> comments instead of frankenstein comments made up of parts that were
> previously considered acceptable. I'll have a new version ready by
> tomorrow.
v12 attached has my attempt at writing better comments for this section
of lazy_scan_heap(). Above the combined FSM update code, I have written
a comment that is a revised version of your comment from above the
lazy_scan_noprune() FSM update code, with some of the additional details
from the previous comment above the lazy_scan_prune() FSM update code.
The one part that I did not incorporate was the point about how
sometimes we think we'll do a second pass on the block, so we don't
update the FSM, but then we end up not doing that pass, and it's all
okay:

 * Note: It's not in fact 100% certain that we really will call
 * lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index
 * vacuuming (and so must skip heap vacuuming). This is deemed okay
 * because it only happens in emergencies, or when there is very
 * little free space anyway. (Besides, we start recording free space
 * in the FSM once index vacuuming has been abandoned.)

I didn't incorporate it because I wasn't sure I understood the
situation. I can imagine us skipping the FSM update after
lazy_scan_prune() because there are indexes on the relation and dead
items on the page, so we think we'll do a second pass. Later, we end up
triggering a failsafe vacuum or, somehow, there are still too few TIDs
for the second pass, so we set do_index_vacuuming to false. Then we
would never record this block's free space in the FSM. That seems fine
(which is what the comment says). So, what does the last sentence mean?
"Besides, we start recording..."

- Melanie
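P.S. To make my mental model concrete, here is how I picture the
interaction, written out as a sketch. This is a paraphrase of my
understanding, not the actual vacuumlazy.c code: the bypass and
failsafe call sites in lazy_vacuum() are compressed into one spot, and
buffer locking is elided.

    /*
     * Sketch only: in lazy_vacuum(), the bypass optimization or the
     * wraparound failsafe can disable index vacuuming (and with it the
     * second heap pass) partway through the scan of the heap.
     */
    if (bypass || lazy_check_wraparound_failsafe(vacrel))
        vacrel->do_index_vacuuming = false;

    /*
     * Back in lazy_scan_heap(), every page scanned after that point
     * now satisfies the FSM-update condition, so its free space is
     * recorded immediately. Pages whose FSM update we skipped before
     * the flag flipped are simply never recorded.
     */
    if (vacrel->nindexes == 0
        || !vacrel->do_index_vacuuming
        || !has_lpdead_items)
        RecordPageWithFreeSpace(vacrel->rel, blkno,
                                PageGetHeapFreeSpace(page));

If that reading is right, the parenthetical just means that abandoning
index vacuuming is self-correcting for pages scanned afterward, even
though already-skipped pages stay unrecorded until some later vacuum.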
From fabf79ed6f5866634a324f740e2eb0a5530b3286 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Tue, 16 Jan 2024 17:28:07 -0500
Subject: [PATCH v12] Combine FSM updates for prune and no prune cases

lazy_scan_prune() and lazy_scan_noprune() update the freespace map
with identical conditions -- both with the goal of doing so once for
each page vacuumed. Combine both of their FSM updates. This
consolidation is easier now that cb970240f13df2 moved visibility map
updates into lazy_scan_prune().

While combining the FSM updates, simplify the logic for calling
lazy_scan_new_or_empty() and lazy_scan_noprune().

Discussion: https://postgr.es/m/CA%2BTgmoaLTvipm%3Dxx4rJLr07m908PCu%3DQH3uCjD1UOn8YaEuO2g%40mail.gmail.com
---
 src/backend/access/heap/vacuumlazy.c | 133 +++++++++++----------------
 1 file changed, 52 insertions(+), 81 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 0fb3953513d..432db4698bc 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -838,6 +838,7 @@ lazy_scan_heap(LVRelState *vacrel)
 		Page		page;
 		bool		all_visible_according_to_vm;
 		bool		has_lpdead_items;
+		bool		got_cleanup_lock = false;
 
 		if (blkno == next_unskippable_block)
 		{
@@ -931,63 +932,41 @@ lazy_scan_heap(LVRelState *vacrel)
 		 */
 		visibilitymap_pin(vacrel->rel, blkno, &vmbuffer);
 
+		buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
+								 vacrel->bstrategy);
+		page = BufferGetPage(buf);
+
 		/*
 		 * We need a buffer cleanup lock to prune HOT chains and defragment
 		 * the page in lazy_scan_prune. But when it's not possible to acquire
 		 * a cleanup lock right away, we may be able to settle for reduced
 		 * processing using lazy_scan_noprune.
 		 */
-		buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
-								 vacrel->bstrategy);
-		page = BufferGetPage(buf);
-		if (!ConditionalLockBufferForCleanup(buf))
-		{
-			LockBuffer(buf, BUFFER_LOCK_SHARE);
+		got_cleanup_lock = ConditionalLockBufferForCleanup(buf);
 
-			/* Check for new or empty pages before lazy_scan_noprune call */
-			if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, true,
-									   vmbuffer))
-			{
-				/* Processed as new/empty page (lock and pin released) */
-				continue;
-			}
-
-			/*
-			 * Collect LP_DEAD items in dead_items array, count tuples,
-			 * determine if rel truncation is safe
-			 */
-			if (lazy_scan_noprune(vacrel, buf, blkno, page, &has_lpdead_items))
-			{
-				Size		freespace = 0;
-				bool		recordfreespace;
+		if (!got_cleanup_lock)
+			LockBuffer(buf, BUFFER_LOCK_SHARE);
 
-				/*
-				 * We processed the page successfully (without a cleanup
-				 * lock).
-				 *
-				 * Update the FSM, just as we would in the case where
-				 * lazy_scan_prune() is called. Our goal is to update the
-				 * freespace map the last time we touch the page. If the
-				 * relation has no indexes, or if index vacuuming is disabled,
-				 * there will be no second heap pass; if this particular page
-				 * has no dead items, the second heap pass will not touch this
-				 * page. So, in those cases, update the FSM now.
-				 *
-				 * After a call to lazy_scan_prune(), we would also try to
-				 * adjust the page-level all-visible bit and the visibility
-				 * map, but we skip that step in this path.
-				 */
-				recordfreespace = vacrel->nindexes == 0
-					|| !vacrel->do_index_vacuuming
-					|| !has_lpdead_items;
-				if (recordfreespace)
-					freespace = PageGetHeapFreeSpace(page);
-				UnlockReleaseBuffer(buf);
-				if (recordfreespace)
-					RecordPageWithFreeSpace(vacrel->rel, blkno, freespace);
-				continue;
-			}
+		/* Check for new or empty pages before lazy_scan_[no]prune call */
+		if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock,
+								   vmbuffer))
+		{
+			/* Processed as new/empty page (lock and pin released) */
+			continue;
+		}
 
+		/*
+		 * If we didn't get the cleanup lock and the page is not new or empty,
+		 * we can still collect LP_DEAD items in the dead_items array for
+		 * later vacuuming, count live and recently dead tuples for vacuum
+		 * logging, and determine if this block could later be truncated. If
+		 * we encounter any xid/mxids that require advancing the
+		 * relfrozenxid/relminxid, we'll have to wait for a cleanup lock and
+		 * call lazy_scan_prune().
+		 */
+		if (!got_cleanup_lock &&
+			!lazy_scan_noprune(vacrel, buf, blkno, page, &has_lpdead_items))
+		{
 			/*
 			 * lazy_scan_noprune could not do all required processing. Wait
 			 * for a cleanup lock, and call lazy_scan_prune in the usual way.
@@ -995,45 +974,36 @@ lazy_scan_heap(LVRelState *vacrel)
 			 */
 			Assert(vacrel->aggressive);
 			LockBuffer(buf, BUFFER_LOCK_UNLOCK);
 			LockBufferForCleanup(buf);
-		}
-
-		/* Check for new or empty pages before lazy_scan_prune call */
-		if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, false, vmbuffer))
-		{
-			/* Processed as new/empty page (lock and pin released) */
-			continue;
+			got_cleanup_lock = true;
 		}
 
 		/*
-		 * Prune, freeze, and count tuples.
+		 * If we got a cleanup lock, we can prune and freeze tuples and
+		 * defragment the page. If we didn't get a cleanup lock, we will still
+		 * consider whether or not to update the FSM.
 		 *
-		 * Accumulates details of remaining LP_DEAD line pointers on page in
-		 * dead_items array. This includes LP_DEAD line pointers that we
-		 * pruned ourselves, as well as existing LP_DEAD line pointers that
-		 * were pruned some time earlier. Also considers freezing XIDs in the
-		 * tuple headers of remaining items with storage. It also determines
-		 * if truncating this block is safe.
+		 * Like lazy_scan_noprune(), lazy_scan_prune() will count
+		 * recently_dead_tuples and live tuples for vacuum logging, determine
+		 * if the block can later be truncated, and accumulate the details of
+		 * remaining LP_DEAD line pointers on the page in the dead_items
+		 * array. These dead items include those pruned by lazy_scan_prune()
+		 * as well as line pointers previously marked LP_DEAD.
 		 */
-		lazy_scan_prune(vacrel, buf, blkno, page,
-						vmbuffer, all_visible_according_to_vm,
-						&has_lpdead_items);
+		if (got_cleanup_lock)
+			lazy_scan_prune(vacrel, buf, blkno, page,
+							vmbuffer, all_visible_according_to_vm,
+							&has_lpdead_items);
 
 		/*
-		 * Final steps for block: drop cleanup lock, record free space in the
-		 * FSM.
-		 *
-		 * If we will likely do index vacuuming, wait until
-		 * lazy_vacuum_heap_rel() to save free space. This doesn't just save
-		 * us some cycles; it also allows us to record any additional free
-		 * space that lazy_vacuum_heap_page() will make available in cases
-		 * where it's possible to truncate the page's line pointer array.
+		 * Now drop the buffer lock and, potentially, update the FSM.
 		 *
-		 * Note: It's not in fact 100% certain that we really will call
-		 * lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index
-		 * vacuuming (and so must skip heap vacuuming). This is deemed okay
-		 * because it only happens in emergencies, or when there is very
-		 * little free space anyway. (Besides, we start recording free space
-		 * in the FSM once index vacuuming has been abandoned.)
+		 * Our goal is to update the freespace map the last time we touch the
+		 * page. If we'll process a block in the second pass, we may free up
+		 * additional space on the page, so it is better to update the FSM
+		 * after the second pass. If the relation has no indexes, or if index
+		 * vacuuming is disabled, there will be no second heap pass; if this
+		 * particular page has no dead items, the second heap pass will not
+		 * touch this page. So, in those cases, update the FSM now.
 		 */
 		if (vacrel->nindexes == 0
 			|| !vacrel->do_index_vacuuming
@@ -1047,9 +1017,10 @@
 		/*
 		 * Periodically perform FSM vacuuming to make newly-freed space
 		 * visible on upper FSM pages. This is done after vacuuming if the
-		 * table has indexes.
+		 * table has indexes. There will only be newly-freed space if we
+		 * held the cleanup lock and lazy_scan_prune() was called.
 		 */
-		if (vacrel->nindexes == 0 && has_lpdead_items &&
+		if (got_cleanup_lock && vacrel->nindexes == 0 && has_lpdead_items &&
 			blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)
 		{
 			FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,
-- 
2.37.2
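For anyone reviewing the interleaved hunks above, the per-block flow
that results from the patch condenses to something like this. It is
summarized from the diff, not copied from it: declarations, visibility
map handling, and the periodic FSM vacuuming are elided, so read it as
an outline rather than the exact code.

    /* Condensed outline of the patched per-block flow (details elided). */
    buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
                             vacrel->bstrategy);
    page = BufferGetPage(buf);

    got_cleanup_lock = ConditionalLockBufferForCleanup(buf);
    if (!got_cleanup_lock)
        LockBuffer(buf, BUFFER_LOCK_SHARE);

    /* New and empty pages are handled the same way in both lock modes. */
    if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock,
                               vmbuffer))
        continue;               /* lock and pin already released */

    if (!got_cleanup_lock &&
        !lazy_scan_noprune(vacrel, buf, blkno, page, &has_lpdead_items))
    {
        /* Reduced processing failed; upgrade to a cleanup lock. */
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);
        LockBufferForCleanup(buf);
        got_cleanup_lock = true;
    }

    if (got_cleanup_lock)
        lazy_scan_prune(vacrel, buf, blkno, page, vmbuffer,
                        all_visible_according_to_vm, &has_lpdead_items);

    /* Single FSM update shared by the prune and no-prune paths. */
    if (vacrel->nindexes == 0
        || !vacrel->do_index_vacuuming
        || !has_lpdead_items)
    {
        Size        freespace = PageGetHeapFreeSpace(page);

        UnlockReleaseBuffer(buf);
        RecordPageWithFreeSpace(vacrel->rel, blkno, freespace);
    }

The point of the restructuring is that got_cleanup_lock is the only
thing distinguishing the prune and no-prune paths by the time we reach
the shared FSM update.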