As discussed in the vacuum ring buffer thread: make VACUUM update the FSM more frequently, to avoid the case where autovacuum fails to reach the point where it updates the FSM in highly contended tables.
Introduce a tree pruning threshold to FreeSpaceMapVacuum that avoids recursing into branches that already contain enough free space, so it doesn't have to traverse the whole FSM and thus incur quadratic costs. Intermediate FSM vacuums only need to make enough free space visible to avoid relation extension until the final (non-partial) FSM vacuum.

Run partial FSM vacuums after each index pass. That is a point at which whole ranges of the heap have been thoroughly cleaned, and we can expect no further updates to those ranges of the FSM save for concurrent activity. When there are no indexes, and thus no index passes, run partial FSM vacuums every 8GB of dirtied pages or 1/8th of the relation, whichever is larger. This allows some partial work to be made visible without incurring quadratic cost.

In any case, the FSM is small in relation to the table, so even when quadratic cost is involved, it should not be problematic. Index passes already incur quadratic cost, and the addition of the FSM vacuums is unlikely to be measurable.

Run a partial FSM vacuum with a low pruning threshold right at the beginning of the VACUUM, to finish up any work left over from prior canceled vacuum runs, something that is common in highly contended tables when running only autovacuum. Autovacuum cancellation is thus handled by updating the FSM first thing when autovacuum retries vacuuming the relation.

I attempted to add an autovacuum work item instead, performing the FSM update shortly after the cancel, but that approach had issues so I abandoned it. For one, it would sometimes crash when adding the work item from inside autovacuum itself. I didn't find the cause of the crash, but I suspect AutoVacuumRequestWork was being called in a context where it was not safe. Beyond that, work items didn't seem like a good fit for our purposes anyway:

- they are only processed after all tables in the database have been processed, which could take ages;
- they are capped at 256, which would be troublesome for databases with more than 256 tables that trigger this case; and
- they would need de-duplication, which has quadratic cost without major refactoring of the feature.

So, patch attached. I'll add it to the next CF as well.
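Before the patch itself, to make the pruning idea concrete, here is a simplified, self-contained sketch of threshold-pruned tree vacuuming. The FsmNode type and vacuum_node function are invented for illustration only; the real code (fsm_vacuum_page in the patch below) walks the FSM's on-disk page tree rather than a pointer tree:

/*
 * Sketch: recompute a tree's aggregated free-space values, skipping
 * branches that already advertise at least "threshold" free space.
 * A threshold of 0 disables pruning, i.e. a full vacuum of the tree.
 */
typedef struct FsmNode
{
    int     avail;              /* free space advertised for this branch */
    int     nchildren;          /* 0 for leaf nodes */
    struct FsmNode **children;
} FsmNode;

static int
vacuum_node(FsmNode *node, int threshold)
{
    int     max_avail = 0;

    if (node->nchildren == 0)
        return node->avail;     /* leaf values come from the heap pages */

    for (int i = 0; i < node->nchildren; i++)
    {
        FsmNode *child = node->children[i];

        /*
         * Tree pruning: if this branch already advertises enough free
         * space, don't recurse into it.  A partial vacuum only needs
         * to expose enough space to avoid relation extension; fixing
         * up already-good branches can wait for the final full pass.
         */
        if (threshold > 0 && child->avail >= threshold)
        {
            if (child->avail > max_avail)
                max_avail = child->avail;
            continue;
        }

        /* Recurse, then propagate the corrected value upward. */
        child->avail = vacuum_node(child, threshold);
        if (child->avail > max_avail)
            max_avail = child->avail;
    }

    return max_avail;
}

The upward propagation is the part that makes free space visible: FSM searches descend from the root following the aggregated maxima, so a page's free space is only reachable once every ancestor reflects it.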
From 791b9d2006b5abd67a8efb3f7c6cc99141ddbb09 Mon Sep 17 00:00:00 2001
From: Claudio Freire <klaussfre...@gmail.com>
Date: Fri, 28 Jul 2017 21:31:59 -0300
Subject: [PATCH] Vacuum: Update FSM more frequently

Make Vacuum update the FSM more frequently, to avoid the case where
autovacuum fails to reach the point where it updates the FSM in
highly contended tables.

Introduce a tree pruning threshold to FreeSpaceMapVacuum that avoids
recursing into branches that already contain enough free space, to
avoid having to traverse the whole FSM and thus induce quadratic costs.
Intermediate FSM vacuums are only supposed to make enough free space
visible to avoid extension until the final (non-partial) FSM vacuum.

Run partial FSM vacuums after each index pass, which is a point at
which whole ranges of the heap have been thoroughly cleaned, and we
can expect no further updates to those ranges of the FSM save for
concurrent activity. When there are no indexes, and thus no index
passes, run partial FSM vacuums every 8GB of dirtied pages or 1/8th
of the relation, whichever is larger. This allows some partial work
to be made visible without incurring quadratic cost.

In any case, the FSM is small in relation to the table, so even when
quadratic cost is involved, it should not be problematic. Index passes
already incur quadratic cost, and the addition of the FSM is unlikely
to be measurable.

Run a partial FSM vacuum with a low pruning threshold right at the
beginning of the VACUUM to finish up any work left over from prior
canceled vacuum runs, something that is common in highly contended
tables when running only autovacuum.
---
 src/backend/access/brin/brin.c             |  2 +-
 src/backend/access/brin/brin_pageops.c     | 10 +++---
 src/backend/commands/vacuumlazy.c          | 60 +++++++++++++++++++++++++++++-
 src/backend/storage/freespace/freespace.c  | 31 ++++++++++++----
 src/backend/storage/freespace/indexfsm.c   |  2 +-
 src/include/storage/freespace.h            |  2 +-
 6 files changed, 90 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index efebeb0..bb80edd 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -1424,5 +1424,5 @@ brin_vacuum_scan(Relation idxrel, BufferAccessStrategy strategy)
 	 * the way to the top.
 	 */
 	if (vacuum_fsm)
-		FreeSpaceMapVacuum(idxrel);
+		FreeSpaceMapVacuum(idxrel, 0);
 }
diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index 80f803e..0af4e0a 100644
--- a/src/backend/access/brin/brin_pageops.c
+++ b/src/backend/access/brin/brin_pageops.c
@@ -130,7 +130,7 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,
 			brin_initialize_empty_new_buffer(idxrel, newbuf);
 			UnlockReleaseBuffer(newbuf);
 			if (extended)
-				FreeSpaceMapVacuum(idxrel);
+				FreeSpaceMapVacuum(idxrel, 0);
 		}
 		return false;
 	}
@@ -150,7 +150,7 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,
 			brin_initialize_empty_new_buffer(idxrel, newbuf);
 			UnlockReleaseBuffer(newbuf);
 			if (extended)
-				FreeSpaceMapVacuum(idxrel);
+				FreeSpaceMapVacuum(idxrel, 0);
 		}
 		return false;
 	}
@@ -205,7 +205,7 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,
 		LockBuffer(oldbuf, BUFFER_LOCK_UNLOCK);
 
 		if (extended)
-			FreeSpaceMapVacuum(idxrel);
+			FreeSpaceMapVacuum(idxrel, 0);
 
 		return true;
 	}
@@ -307,7 +307,7 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,
 	{
 		Assert(BlockNumberIsValid(newblk));
 		RecordPageWithFreeSpace(idxrel, newblk, freespace);
-		FreeSpaceMapVacuum(idxrel);
+		FreeSpaceMapVacuum(idxrel, 0);
 	}
 
 	return true;
@@ -451,7 +451,7 @@ brin_doinsert(Relation idxrel, BlockNumber pagesPerRange,
 	LockBuffer(revmapbuf, BUFFER_LOCK_UNLOCK);
 
 	if (extended)
-		FreeSpaceMapVacuum(idxrel);
+		FreeSpaceMapVacuum(idxrel, 0);
 
 	return off;
 }
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index fabb2f8..d0e969c 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -250,6 +250,17 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->pages_removed = 0;
 	vacrelstats->lock_waiter_detected = false;
 
+	/*
+	 * Vacuum the Free Space Map partially before we start.
+	 * If an earlier vacuum was canceled, and that's likely in
+	 * highly contended tables, we may need to finish up. If we do
+	 * it now, we make the space visible to other backends regardless
+	 * of whether we succeed in finishing this time around.
+	 * Don't bother checking branches that already have usable space,
+	 * though.
+	 */
+	FreeSpaceMapVacuum(onerel, 64);
+
 	/* Open all indexes of the relation */
 	vac_open_indexes(onerel, RowExclusiveLock, &nindexes, &Irel);
 	vacrelstats->hasindex = (nindexes > 0);
@@ -287,7 +298,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 								 PROGRESS_VACUUM_PHASE_FINAL_CLEANUP);
 
 	/* Vacuum the Free Space Map */
-	FreeSpaceMapVacuum(onerel);
+	FreeSpaceMapVacuum(onerel, 0);
 
 	/*
 	 * Update statistics in pg_class.
@@ -463,7 +474,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	HeapTupleData tuple;
 	char	   *relname;
 	BlockNumber empty_pages,
-				vacuumed_pages;
+				vacuumed_pages,
+				vacuumed_pages_at_fsm_vac,
+				vacuum_fsm_every_pages;
 	double		num_tuples,
 				tups_vacuumed,
 				nkeep,
@@ -473,6 +486,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber next_unskippable_block;
+	Size		max_freespace = 0;
 	bool		skipping_blocks;
 	xl_heap_freeze_tuple *frozen;
 	StringInfoData buf;
@@ -491,7 +505,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 					get_namespace_name(RelationGetNamespace(onerel)),
 					relname)));
 
-	empty_pages = vacuumed_pages = 0;
+	empty_pages = vacuumed_pages = vacuumed_pages_at_fsm_vac = 0;
 	num_tuples = tups_vacuumed = nkeep = nunused = 0;
 
 	indstats = (IndexBulkDeleteResult **)
@@ -504,6 +518,16 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
+	/*
+	 * Vacuum the FSM a few times in the middle if the relation is big
+	 * and has no indexes. Once every 8GB of dirtied pages, or one 8th
+	 * of the relation, whichever is bigger, to avoid quadratic cost.
+	 * If it has indexes, this is ignored, and the FSM is vacuumed after
+	 * each index pass.
+	 */
+	vacuum_fsm_every_pages = nblocks / 8;
+	vacuum_fsm_every_pages = Max(vacuum_fsm_every_pages, 1048576);
+
 	lazy_space_alloc(vacrelstats, nblocks);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
@@ -743,6 +767,14 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			vacrelstats->num_dead_tuples = 0;
 			vacrelstats->num_index_scans++;
 
+			/*
+			 * Vacuum the Free Space Map to make the changes we made visible.
+			 * Don't recurse into branches with more than max_freespace,
+			 * as we couldn't set anything higher than that anyway.
+			 */
+			FreeSpaceMapVacuum(onerel, max_freespace);
+			max_freespace = 0;
+
 			/* Report that we are once again scanning the heap */
 			pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,
 										 PROGRESS_VACUUM_PHASE_SCAN_HEAP);
@@ -865,6 +897,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			UnlockReleaseBuffer(buf);
 
 			RecordPageWithFreeSpace(onerel, blkno, freespace);
+			if (freespace > max_freespace)
+				max_freespace = freespace;
 			continue;
 		}
 
@@ -904,6 +938,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			UnlockReleaseBuffer(buf);
 
 			RecordPageWithFreeSpace(onerel, blkno, freespace);
+			if (freespace > max_freespace)
+				max_freespace = freespace;
 			continue;
 		}
 
@@ -1250,7 +1286,23 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * taken if there are no indexes.)
 		 */
 		if (vacrelstats->num_dead_tuples == prev_dead_count)
+		{
 			RecordPageWithFreeSpace(onerel, blkno, freespace);
+			if (freespace > max_freespace)
+				max_freespace = freespace;
+		}
+
+		/*
+		 * If there are no indexes then we should periodically vacuum the FSM
+		 * on huge relations to make free space visible early.
+		 */
+		if (nindexes == 0 &&
+			(vacuumed_pages - vacuumed_pages_at_fsm_vac) > vacuum_fsm_every_pages)
+		{
+			/* Vacuum the Free Space Map */
+			FreeSpaceMapVacuum(onerel, 0);
+			vacuumed_pages_at_fsm_vac = vacuumed_pages;
+		}
 	}
 
 	/* report that everything is scanned and vacuumed */
@@ -1900,6 +1952,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
 		 * We don't insert a vacuum delay point here, because we have an
 		 * exclusive lock on the table which we want to hold for as short a
 		 * time as possible. We still need to check for interrupts however.
+		 * We might have to acquire the autovacuum lock, but that
+		 * shouldn't pose a deadlock risk.
 		 */
 		CHECK_FOR_INTERRUPTS();
 
diff --git a/src/backend/storage/freespace/freespace.c b/src/backend/storage/freespace/freespace.c
index 4648473..035035a 100644
--- a/src/backend/storage/freespace/freespace.c
+++ b/src/backend/storage/freespace/freespace.c
@@ -108,7 +108,7 @@ static Size fsm_space_cat_to_avail(uint8 cat);
 static int fsm_set_and_search(Relation rel, FSMAddress addr, uint16 slot,
 				   uint8 newValue, uint8 minValue);
 static BlockNumber fsm_search(Relation rel, uint8 min_cat);
-static uint8 fsm_vacuum_page(Relation rel, FSMAddress addr, bool *eof);
+static uint8 fsm_vacuum_page(Relation rel, FSMAddress addr, Size threshold, bool *eof);
 static BlockNumber fsm_get_lastblckno(Relation rel, FSMAddress addr);
 static void fsm_update_recursive(Relation rel, FSMAddress addr, uint8 new_cat);
 
@@ -376,7 +376,7 @@ FreeSpaceMapTruncateRel(Relation rel, BlockNumber nblocks)
  * FreeSpaceMapVacuum - scan and fix any inconsistencies in the FSM
  */
 void
-FreeSpaceMapVacuum(Relation rel)
+FreeSpaceMapVacuum(Relation rel, Size threshold)
 {
 	bool		dummy;
 
@@ -384,7 +384,7 @@ FreeSpaceMapVacuum(Relation rel, Size threshold)
 	 * Traverse the tree in depth-first order. The tree is stored physically
 	 * in depth-first order, so this should be pretty I/O efficient.
 	 */
-	fsm_vacuum_page(rel, FSM_ROOT_ADDRESS, &dummy);
+	fsm_vacuum_page(rel, FSM_ROOT_ADDRESS, threshold, &dummy);
 }
 
 /******** Internal routines ********/
@@ -663,6 +663,8 @@ fsm_extend(Relation rel, BlockNumber fsm_nblocks)
  * If minValue > 0, the updated page is also searched for a page with at
  * least minValue of free space. If one is found, its slot number is
  * returned, -1 otherwise.
+ *
+ * If minValue == 0, the value at the root node is returned.
  */
 static int
 fsm_set_and_search(Relation rel, FSMAddress addr, uint16 slot,
@@ -687,6 +689,10 @@ fsm_set_and_search(Relation rel, FSMAddress addr, uint16 slot,
 									   addr.level == FSM_BOTTOM_LEVEL,
 									   true);
 	}
+	else
+	{
+		newslot = fsm_get_avail(page, 0);
+	}
 
 	UnlockReleaseBuffer(buf);
 
@@ -785,7 +791,7 @@ fsm_search(Relation rel, uint8 min_cat)
  * Recursive guts of FreeSpaceMapVacuum
  */
 static uint8
-fsm_vacuum_page(Relation rel, FSMAddress addr, bool *eof_p)
+fsm_vacuum_page(Relation rel, FSMAddress addr, Size threshold, bool *eof_p)
 {
 	Buffer		buf;
 	Page		page;
@@ -816,11 +822,19 @@ fsm_vacuum_page(Relation rel, FSMAddress addr, Size threshold, bool *eof_p)
 		{
 			int			child_avail;
 
+			/* Tree pruning for partial vacuums */
+			if (threshold)
+			{
+				child_avail = fsm_get_avail(page, slot);
+				if (child_avail >= threshold)
+					continue;
+			}
+
 			CHECK_FOR_INTERRUPTS();
 
 			/* After we hit end-of-file, just clear the rest of the slots */
 			if (!eof)
-				child_avail = fsm_vacuum_page(rel, fsm_get_child(addr, slot), &eof);
+				child_avail = fsm_vacuum_page(rel, fsm_get_child(addr, slot), threshold, &eof);
 			else
 				child_avail = 0;
 
@@ -884,6 +898,11 @@ fsm_update_recursive(Relation rel, FSMAddress addr, uint8 new_cat)
 	 * information in that.
 	 */
 	parent = fsm_get_parent(addr, &parentslot);
-	fsm_set_and_search(rel, parent, parentslot, new_cat, 0);
+	new_cat = fsm_set_and_search(rel, parent, parentslot, new_cat, 0);
+
+	/*
+	 * Bubble up, not the value we just set, but the one now in the root
+	 * node of the just-updated page, which is the page's highest value.
+	 */
 	fsm_update_recursive(rel, parent, new_cat);
 }
diff --git a/src/backend/storage/freespace/indexfsm.c b/src/backend/storage/freespace/indexfsm.c
index 5cfbd4c..6ffd268 100644
--- a/src/backend/storage/freespace/indexfsm.c
+++ b/src/backend/storage/freespace/indexfsm.c
@@ -70,5 +70,5 @@ RecordUsedIndexPage(Relation rel, BlockNumber usedBlock)
 void
 IndexFreeSpaceMapVacuum(Relation rel)
 {
-	FreeSpaceMapVacuum(rel);
+	FreeSpaceMapVacuum(rel, 0);
 }
diff --git a/src/include/storage/freespace.h b/src/include/storage/freespace.h
index d110f00..79db370 100644
--- a/src/include/storage/freespace.h
+++ b/src/include/storage/freespace.h
@@ -31,7 +31,7 @@ extern void XLogRecordPageWithFreeSpace(RelFileNode rnode, BlockNumber heapBlk,
 										Size spaceAvail);
 
 extern void FreeSpaceMapTruncateRel(Relation rel, BlockNumber nblocks);
-extern void FreeSpaceMapVacuum(Relation rel);
+extern void FreeSpaceMapVacuum(Relation rel, Size threshold);
 extern void UpdateFreeSpaceMap(Relation rel,
 							   BlockNumber startBlkNum,
 							   BlockNumber endBlkNum,
-- 
1.8.4.5
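For quick reference, here is a hypothetical summary (not part of the patch) of the three threshold values the patch ends up passing to FreeSpaceMapVacuum; onerel and max_freespace play the same roles as in vacuumlazy.c above:

#include "postgres.h"
#include "storage/freespace.h"

static void
fsm_vacuum_call_patterns(Relation onerel, Size max_freespace)
{
	/*
	 * At the start of VACUUM: a cheap partial pass that publishes
	 * whatever a previously canceled run left unpublished, pruning
	 * branches that already advertise usable space.
	 */
	FreeSpaceMapVacuum(onerel, 64);

	/*
	 * After each index pass: no FSM slot can end up above the largest
	 * free space recorded since the last pass, so pruning at
	 * max_freespace loses nothing (the caller then resets it to 0).
	 */
	FreeSpaceMapVacuum(onerel, max_freespace);

	/*
	 * Final cleanup, and the BRIN/index-FSM call sites: a full pass
	 * with pruning disabled.
	 */
	FreeSpaceMapVacuum(onerel, 0);
}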