Hi,
On Thu, Jan 15, 2026 at 9:13 AM Masahiko Sawada <[email protected]> wrote:
>
> Thank you for updating the patches! Here are review comments.
>
Thank you for the review!
>
> +static void
> +autovacuum_worker_before_shmem_exit(int code, Datum arg)
> +{
> + if (code != 0)
> + AutoVacuumReleaseAllParallelWorkers();
> +
> + Assert(av_nworkers_reserved == 0);
> +}
>
> While adding the assertion here makes sense, the assertion won't work
> in non-assertion builds. I guess it's safer to call
> AutoVacuumReleaseAllParallelWorkers() regardless of the code to ensure
> that no autovacuum workers exit while holding parallel workers.
>
OK, I agree.
> ---
> + before_shmem_exit(autovacuum_worker_before_shmem_exit, 0);
>
> I think it would be better to set this callback later like before the
> main loop of processing the tables as it makes no sense even if we set
> it very early.
Yeah, agreed. I'll also add a comment there, because we already have an
AutoVacuumReleaseAllParallelWorkers() call in the PG_TRY/PG_CATCH block below.
>
> ---
> + /*
> + * Cap the number of free workers by new parameter's value, if needed.
> + */
> + AutoVacuumShmem->av_freeParallelWorkers =
> + Min(AutoVacuumShmem->av_freeParallelWorkers,
> + autovacuum_max_parallel_workers);
> +
> + if (autovacuum_max_parallel_workers > prev_max_parallel_workers)
> + {
> + /*
> + * If user wants to increase number of parallel autovacuum workers, we
> + * must increase number of free workers.
> + */
> + AutoVacuumShmem->av_freeParallelWorkers +=
> + (autovacuum_max_parallel_workers - prev_max_parallel_workers);
> + }
>
> Suppose the previous autovacuum_max_parallel_workers is 5 and there
> are 2 workers are reserved (i.e., there are 3 free parallel workers),
> if the autovacuum_max_parallel_workers changes to 2, the new
> AutoVacuumShmem->av_freeParallelWorkers would be 2 based on the above
> codes, but I believe that the new number of free workers should be 0
> as there are already 2 workers are running. What do you think? I guess
> we can calculate the new number of free workers by:
>
> Max((autovacuum_max_parallel_workers - prev_max_parallel_workers) +
> AutoVacuumShmem->av_freeParallelWorkers, 0)
>
If autovacuum_max_parallel_workers is changed to 2, then we not only set
av_freeParallelWorkers to 2 but also set av_maxParallelWorkers to 2.
Thus, when the two previously reserved workers are released, the av leader
will encounter this code:
/*
 * If the maximum number of parallel workers was reduced during execution,
 * we must cap the number of available workers at its new value.
 */
AutoVacuumShmem->av_freeParallelWorkers =
Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
AutoVacuumShmem->av_maxParallelWorkers);
I.e. av_freeParallelWorkers will be left as "2".
The formula you suggested is also correct, but if you have no objections,
I would prefer not to change the existing logic. It seems more reliable to
me when the av leader can explicitly handle such a situation.
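To make it concrete with your numbers: with autovacuum_max_parallel_workers = 5
and 2 workers reserved, av_freeParallelWorkers is 3. After the GUC is lowered
to 2, adjust_free_parallel_workers() sets av_freeParallelWorkers = Min(3, 2) = 2
and av_maxParallelWorkers = 2. When the leader later releases its 2 workers,
the code above computes Min(2 + 2, 2) = 2, so we never end up advertising more
free workers than the new maximum.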
> ---
> I've attached a patch proposing some minor changes.
>
Thanks! I agree with all the fixes except a single one:
- * NOTE: We will try to provide as many workers as requested, even if caller
- * will occupy all available workers.
I think that this is a pretty important point, so I'll keep this NOTE in the
v19 patch set. Do you mind?
>
> + /*
> + * Number of planned and actually launched parallel workers for all index
> + * scans, or NULL
> + */
> + PVWorkersUsage *workers_usage;
>
> I think that LVRelState can have PVWorkersUsage instead of a pointer to it.
>
Previously I used a NULL value of this pointer as a flag that we don't need
to log workers usage. Now I'll add a boolean flag for this purpose (IIUC, the
"nplanned > 0" condition is not enough to determine whether we should log
workers usage, because VACUUM (PARALLEL) can be called without VERBOSE).
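Roughly, the v19 logic in heap_vacuum_rel() looks like this (see 0002 below):

    if (instrument)
    {
        ...
        /*
         * Worker usage statistics must be accumulated for parallel
         * autovacuum and for VACUUM (PARALLEL, VERBOSE).
         */
        log_workers_usage = (params.nworkers > -1);
    }

Since 'instrument' is set only for VERBOSE or for autovacuum with
log_autovacuum_min_duration >= 0, this covers both cases.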
> ---
> + /*
> + * Allocate space for workers usage statistics. Thus, we explicitly
> + * make clear that such statistics must be accumulated. For now, this
> + * is used only by autovacuum leader worker, because it must log it in
> + * the end of table processing.
> + */
> + vacrel->workers_usage = AmAutoVacuumWorkerProcess() ?
> + (PVWorkersUsage *) palloc0(sizeof(PVWorkersUsage)) :
> + NULL;
>
> I think we can report the worker statistics even in VACUUM VERBOSE
> logs. Currently VACUUM VERBOSE reports the worker usage just during
> index vacuuming but it would make sense to report the overall
> statistics in vacuum logs. It would help make VACUUM VERBOSE logs and
> autovacuum logs consistent.
>
Agree.
> But we don't need to report the worker usage if we didn't use the
> parallel vacuum (i.e., if npanned == 0).
>
As I wrote above, we don't need to log workers usage if the VERBOSE option
is not specified (even if nplanned > 0). Am I missing something?
> ---
> + /* Remember these values, if we asked to. */
> + if (wusage != NULL)
> + {
> + wusage->nlaunched += pvs->pcxt->nworkers_launched;
> + wusage->nplanned += nworkers;
> + }
>
> This code runs after the attempt to reserve parallel workers.
> Consequently, if we fail to reserve any workers due to
> autovacuum_max_parallel_workers, we report the status as if parallel
> vacuum wasn't planned at all. I think knowing the number of workers
> that were planned but not reserved would provide valuable insight for
> users tuning autovacuum_max_parallel_workers.
>
100% agree.
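In v19 I record nplanned before the reservation can shrink nworkers (see
0002 below):

    /* Remember this value, if we were asked to */
    if (wusage != NULL && nworkers > 0)
        wusage->nplanned += nworkers;

    /*
     * Reserve workers in autovacuum global state. Note that we may be given
     * fewer workers than we requested.
     */
    if (AmAutoVacuumWorkerProcess() && nworkers > 0)
        AutoVacuumReserveParallelWorkers(&nworkers);

...and nlaunched is accumulated after LaunchParallelWorkers(), so the
planned-but-not-reserved workers remain visible in the log.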
> ---
> + if (vacrel->workers_usage)
> + appendStringInfo(&buf,
> + _("parallel index vacuum/cleanup :
> workers planned = %d, workers launched = %d\n"),
> + vacrel->workers_usage->nplanned,
> + vacrel->workers_usage->nlaunched);
>
> Since these numbers are the total number of workers planned and
> launched, how about changing it to something "parallel index
> vacuum/cleanup: %d workers were planned and %d workers were launched
> in total"?
>
Agree.
>
> +typedef enum AVLeaderFaulureType
> +{
> + FAIL_NONE,
> + FAIL_ERROR,
> + FAIL_FATAL,
> +} AVLeaderFaulureType;
>
> I'm concerned that it is somewhat overwrapped with what injection
> points does as we can set 'error' to injection_points_attach(). For
> the FATAL error, we can terminate the autovacuum worker by using
> pg_terminate_backend() that keeps waiting due to
> injection_point_attach() with action='wait'.
>
Oh, I didn't know about the possibility of testing FATAL errors with
pg_terminate_backend(). After reading your message I found this pattern
in signal_autovacuum.pl. This is beautiful.
Thank you, I'll rework these tests.
> ---
> + /*
> + * Injection point to help exercising number of available parallel
> + * autovacuum workers.
> + */
> + INJECTION_POINT("autovacuum-set-free-parallel-workers-num",
> + &AutoVacuumShmem->av_freeParallelWorkers);
>
> This injection point is added to two places. IIUC the purpose of this
> function is to update the free_parallel_workers of InjPointState. And
> that value is taken by get_parallel_autovacuum_free_workers() SQL
> function during the TAP test. I guess it's better to have
> get_parallel_autovacuum_free_workers() function to direcly check
> av_freeParallelWorkers with a proper locking.
>
Agree.
> ---
> It would be great if we could test the av_freeParallelWorkers
> adjustment when max_parallel_maintenance_workers changes.
>
You mean "when autovacuum_max_parallel_workers changes"?
I'll add a test for it.
>
> * 0005 patch
>
> +typedef struct PVSharedCostParams
> +{
> + slock_t spinlock; /* protects all fields below */
> +
> + /* Copies of corresponding parameters from autovacuum leader process */
> + double cost_delay;
> + int cost_limit;
> +} PVSharedCostParams;
>
> Since Parallel workers don't reload the config file I think other
> vacuum delay related parameters such as VacuumCostPage{Miss|Hit|Dirty}
> also needs to be shared by the leader.
>
Yes, I remember it. I didn't add them in the previous patch because it was
experimental. I'll add all appropriate parameters in v19.
> ---
> + if (!AmAutoVacuumWorkerProcess())
> + {
> + /*
> + * If we are autovacuum parallel worker, check whether cost-based
> + * parameters had changed in leader worker.
> + * If so, vacuum_cost_delay and vacuum_cost_limit will be set to the
> + * values which leader worker is operating on.
> + *
> + * Do it before checking VacuumCostActive, because its value might be
> + * changed after leader's parameters consumption.
> + */
> + parallel_vacuum_fix_cost_based_params();
> + }
>
> We need to add checks to prevent the normal backend running the VACUUM
> command from calling parallel_vacuum_fix_cost_based_params().
>
We already have such a check inside the "fix_cost_based" function:
/* Check whether we are running parallel autovacuum */
if (pv_shared_cost_params == NULL)
return false;
We also have this comment:
* If we are autovacuum parallel worker, check whether cost-based
* parameters had changed in leader worker.
As an alternative, I'll add a comment explicitly saying that the process
will return immediately if it is not a parallel autovacuum participant.
> IIUC autovacuum parallel workers would call
> parallel_vacuum_fix_cost_based_params() and update their
> vacuum_cost_{delay|limit} every vacuum_delay_point().
>
Yep.
> ---
> +/*
> + * Function to be called from parallel autovacuum worker in order to sync
> + * some cost-based delay parameter with the leader worker.
> + */
> +bool
> +parallel_vacuum_fix_cost_based_params(void)
> +{
>
> The 'fix' doesn't sound right to me as it's not broken actually. How
> about something like parallel_vacuum_update_shared_delay_params?
>
Agree.
> + Assert(IsParallelWorker() && !AmAutoVacuumWorkerProcess());
> +
> + SpinLockAcquire(&pv_shared_cost_params->spinlock);
> +
> + vacuum_cost_delay = pv_shared_cost_params->cost_delay;
> + vacuum_cost_limit = pv_shared_cost_params->cost_limit;
> +
> + SpinLockRelease(&pv_shared_cost_params->spinlock);
>
> IIUC autovacuum parallel workers seems to update their
> vacuum_cost_{delay|limit} every vacuum_delay_point(), which seems not
> good. Can we somehow avoid unnecessary updates?
More precisely, the parallel worker *reads* the leader's parameters at every
delay point. Obviously, this does not mean that the parameters will
necessarily be updated. But I don't see anything wrong with this logic: we
simply get the most recent parameters from the leader every time. Of course,
we could introduce some signaling mechanism, but it would have the same
effect as the current code.
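Just to illustrate (a hypothetical sketch, not part of v19): if we ever want
to avoid redundant updates, we could keep a change counter in
PVSharedCostParams that the leader bumps on every update, and have workers
compare it before copying the values:

    /* hypothetical 'nchanges' field, bumped by the leader on every update */
    static uint32 last_seen_nchanges = 0;

    SpinLockAcquire(&pv_shared_cost_params->spinlock);
    if (pv_shared_cost_params->nchanges == last_seen_nchanges)
    {
        /* Nothing changed since the last call, skip VacuumUpdateCosts() */
        SpinLockRelease(&pv_shared_cost_params->spinlock);
        return false;
    }
    last_seen_nchanges = pv_shared_cost_params->nchanges;
    vacuum_cost_delay = pv_shared_cost_params->cost_delay;
    vacuum_cost_limit = pv_shared_cost_params->cost_limit;
    SpinLockRelease(&pv_shared_cost_params->spinlock);

But the spinlock acquisition would remain on every call, so I don't think it
buys us much.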
> ---
> +
> + if (vacuum_cost_delay > 0 && !VacuumFailsafeActive)
> + VacuumCostActive = true;
> +
>
> Should we consider the case of disabling VacuumCostActive as well?
>
I think that we should. I'll call VacuumUpdateCosts() instead of writing
this logic manually. IIUC, it will not break anything.
Again, thank you very much for the review!
Please see the v19 patches, which incorporate all of the comments above as
well as zengman's notice. Main changes:
1) Fixes for the before_shmem_exit callback
2) Some comments reworded + pgindent on all files
3) Workers usage can also be reported for VACUUM (PARALLEL)
4) Deeply reworked tests
5) Propagation (from leader to workers) of all cost-based delay parameters
I have also changed the structure of the patch set - the tests and
documentation are now the last patches to be applied.
--
Best regards,
Daniil Davydov
From 96180a1b4a78c6202c95131afee2b75be8fcc534 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 02:32:44 +0700
Subject: [PATCH v19 5/5] Documentation for parallel autovacuum
---
doc/src/sgml/config.sgml | 17 +++++++++++++++++
doc/src/sgml/maintenance.sgml | 12 ++++++++++++
doc/src/sgml/ref/create_table.sgml | 20 ++++++++++++++++++++
3 files changed, 49 insertions(+)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0fad34da6eb..c64897f4707 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2849,6 +2849,7 @@ include_dir 'conf.d'
<para>
When changing this value, consider also adjusting
<xref linkend="guc-max-parallel-workers"/>,
+ <xref linkend="guc-autovacuum-max-parallel-workers"/>,
<xref linkend="guc-max-parallel-maintenance-workers"/>, and
<xref linkend="guc-max-parallel-workers-per-gather"/>.
</para>
@@ -9284,6 +9285,22 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</listitem>
</varlistentry>
+ <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+ <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+ <secondary>configuration parameter</secondary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Sets the maximum number of parallel autovacuum workers that can be
+ used for parallel index vacuuming at one time. This value is capped
+ by <xref linkend="guc-max-worker-processes"/>. The default is 2.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</sect2>
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..c9f9163c551 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT: Execute a database-wide VACUUM in that database.
autovacuum workers' activity.
</para>
+ <para>
+ If an autovacuum worker process comes across a table whose
+ <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+ is set, it will launch parallel workers in order to vacuum the indexes of
+ this table in parallel. Parallel workers are taken from the pool of processes
+ established by <xref linkend="guc-max-worker-processes"/>, limited by
+ <xref linkend="guc-max-parallel-workers"/>.
+ The total number of parallel autovacuum workers that can be active at one
+ time is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+ configuration parameter.
+ </para>
+
<para>
If several large tables all become eligible for vacuuming in a short
amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 77c5a763d45..3592c9acff9 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1717,6 +1717,26 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+ <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Sets the maximum number of parallel autovacuum workers that can process
+ indexes of this table.
+ The default value is -1, which means no parallel index vacuuming for
+ this table. If the value is 0, the parallel degree is computed based
+ on the number of indexes.
+ Note that the computed number of workers may not actually be available
+ at run time. If this occurs, autovacuum will run with fewer workers
+ than expected.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
<term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
<indexterm>
--
2.43.0
From ba30e217073cee821cd842f12ed91e4c10adb255 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v19 3/5] Cost based parameters propagation for parallel
autovacuum
---
src/backend/commands/vacuum.c | 29 +++++++-
src/backend/commands/vacuumparallel.c | 99 +++++++++++++++++++++++++++
src/backend/postmaster/autovacuum.c | 2 +-
src/include/commands/vacuum.h | 2 +
4 files changed, 129 insertions(+), 3 deletions(-)
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index aa4fbec143f..4622107734f 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2430,8 +2430,27 @@ vacuum_delay_point(bool is_analyze)
/* Always check for interrupts */
CHECK_FOR_INTERRUPTS();
- if (InterruptPending ||
- (!VacuumCostActive && !ConfigReloadPending))
+ if (InterruptPending)
+ return;
+
+ if (!AmAutoVacuumWorkerProcess())
+ {
+ /*
+ * If we are a parallel *autovacuum* worker, check whether the
+ * cost-based delay parameters have changed in the leader worker. If
+ * so, the corresponding parameters will be updated to the values the
+ * leader worker is operating with.
+ *
+ * Do it before checking VacuumCostActive, because its value might
+ * change after the leader's parameters are consumed.
+ *
+ * Note that this function has no effect if we are a non-autovacuum
+ * parallel worker.
+ */
+ parallel_vacuum_update_shared_delay_params();
+ }
+
+ if (!VacuumCostActive && !ConfigReloadPending)
return;
/*
@@ -2445,6 +2464,12 @@ vacuum_delay_point(bool is_analyze)
ConfigReloadPending = false;
ProcessConfigFile(PGC_SIGHUP);
VacuumUpdateCosts();
+
+ /*
+ * If we are the parallel autovacuum leader and some of the cost-based
+ * parameters have changed, let the other parallel workers know.
+ */
+ parallel_vacuum_propagate_cost_based_params();
}
/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index c32314f9731..71449630b63 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -53,6 +53,25 @@
#define PARALLEL_VACUUM_KEY_WAL_USAGE 4
#define PARALLEL_VACUUM_KEY_INDEX_STATS 5
+/*
+ * Only the autovacuum leader can reload the config file. We use this
+ * structure in parallel autovacuum to keep the workers' parameters in sync
+ * with the leader's parameters.
+ */
+typedef struct PVSharedCostParams
+{
+ slock_t spinlock; /* protects all fields below */
+
+ /* Copies of corresponding parameters from autovacuum leader process */
+ double cost_delay;
+ int cost_limit;
+ int cost_page_dirty;
+ int cost_page_hit;
+ int cost_page_miss;
+} PVSharedCostParams;
+
+static PVSharedCostParams * pv_shared_cost_params = NULL;
+
/*
* Shared information among parallel workers. So this is allocated in the DSM
* segment.
@@ -122,6 +141,18 @@ typedef struct PVShared
/* Statistics of shared dead items */
VacDeadItemsInfo dead_items_info;
+
+ /*
+ * If 'true' then we are running parallel autovacuum. Otherwise, we are
+ * running parallel maintenance VACUUM.
+ */
+ bool am_parallel_autovacuum;
+
+ /*
+ * Struct for keeping the parameters of supporting parallel autovacuum
+ * workers in sync with the leader.
+ */
+ PVSharedCostParams cost_params;
} PVShared;
/* Status used during parallel index vacuum or cleanup */
@@ -395,6 +426,19 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
pg_atomic_init_u32(&(shared->active_nworkers), 0);
pg_atomic_init_u32(&(shared->idx), 0);
+ shared->am_parallel_autovacuum = AmAutoVacuumWorkerProcess();
+
+ if (shared->am_parallel_autovacuum)
+ {
+ shared->cost_params.cost_delay = vacuum_cost_delay;
+ shared->cost_params.cost_limit = vacuum_cost_limit;
+ shared->cost_params.cost_page_dirty = VacuumCostPageDirty;
+ shared->cost_params.cost_page_hit = VacuumCostPageHit;
+ shared->cost_params.cost_page_miss = VacuumCostPageMiss;
+ SpinLockInit(&shared->cost_params.spinlock);
+ pv_shared_cost_params = &(shared->cost_params);
+ }
+
shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
pvs->shared = shared;
@@ -537,6 +581,58 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wusage);
}
+/*
+ * Function to be called from a parallel autovacuum worker in order to sync
+ * some cost-based delay parameters with the leader worker.
+ */
+bool
+parallel_vacuum_update_shared_delay_params(void)
+{
+ /* Check whether we are running parallel autovacuum */
+ if (pv_shared_cost_params == NULL)
+ return false;
+
+ Assert(IsParallelWorker() && !AmAutoVacuumWorkerProcess());
+
+ SpinLockAcquire(&pv_shared_cost_params->spinlock);
+
+ VacuumCostDelay = pv_shared_cost_params->cost_delay;
+ VacuumCostLimit = pv_shared_cost_params->cost_limit;
+ VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+ VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+ VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+
+ SpinLockRelease(&pv_shared_cost_params->spinlock);
+
+ VacuumUpdateCosts();
+
+ return true;
+}
+
+/*
+ * Function to be called from the parallel autovacuum leader in order to
+ * propagate some cost-based parameters to the supporting workers.
+ */
+void
+parallel_vacuum_propagate_cost_based_params(void)
+{
+ /* Check whether we are running parallel autovacuum */
+ if (pv_shared_cost_params == NULL)
+ return;
+
+ Assert(AmAutoVacuumWorkerProcess());
+
+ SpinLockAcquire(&pv_shared_cost_params->spinlock);
+
+ pv_shared_cost_params->cost_delay = vacuum_cost_delay;
+ pv_shared_cost_params->cost_limit = vacuum_cost_limit;
+ pv_shared_cost_params->cost_page_dirty = VacuumCostPageDirty;
+ pv_shared_cost_params->cost_page_hit = VacuumCostPageHit;
+ pv_shared_cost_params->cost_page_miss = VacuumCostPageMiss;
+
+ SpinLockRelease(&pv_shared_cost_params->spinlock);
+}
+
/*
* Compute the number of parallel worker processes to request. Both index
* vacuum and index cleanup can be executed with parallel workers.
@@ -1094,6 +1190,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
VacuumSharedCostBalance = &(shared->cost_balance);
VacuumActiveNWorkers = &(shared->active_nworkers);
+ if (shared->am_parallel_autovacuum)
+ pv_shared_cost_params = &(shared->cost_params);
+
/* Set parallel vacuum state */
pvs.indrels = indrels;
pvs.nindexes = nindexes;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 097b1dd55cf..98965fd8e2d 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1693,7 +1693,7 @@ VacuumUpdateCosts(void)
}
else
{
- /* Must be explicit VACUUM or ANALYZE */
+ /* Must be explicit VACUUM or ANALYZE or parallel autovacuum worker */
vacuum_cost_delay = VacuumCostDelay;
vacuum_cost_limit = VacuumCostLimit;
}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index ec5d70aacdc..09696a8eafe 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -411,6 +411,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
int num_index_scans,
bool estimated_count,
PVWorkersUsage *wusage);
+extern bool parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_cost_based_params(void);
extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
/* in commands/analyze.c */
--
2.43.0
From c961c20ba124ad925711df79a82ef4026b920724 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:03:24 +0700
Subject: [PATCH v19 1/5] Parallel autovacuum
---
src/backend/access/common/reloptions.c | 11 ++
src/backend/commands/vacuumparallel.c | 39 ++++-
src/backend/postmaster/autovacuum.c | 159 +++++++++++++++++-
src/backend/utils/init/globals.c | 1 +
src/backend/utils/misc/guc.c | 8 +-
src/backend/utils/misc/guc_parameters.dat | 9 +
src/backend/utils/misc/postgresql.conf.sample | 2 +
src/bin/psql/tab-complete.in.c | 1 +
src/include/miscadmin.h | 1 +
src/include/postmaster/autovacuum.h | 4 +
src/include/utils/rel.h | 7 +
11 files changed, 232 insertions(+), 10 deletions(-)
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 0b83f98ed5f..692ac46733e 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -222,6 +222,15 @@ static relopt_int intRelOpts[] =
},
SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
},
+ {
+ {
+ "autovacuum_parallel_workers",
+ "Maximum number of parallel autovacuum workers that can be used for processing this table.",
+ RELOPT_KIND_HEAP,
+ ShareUpdateExclusiveLock
+ },
+ -1, -1, 1024
+ },
{
{
"autovacuum_vacuum_threshold",
@@ -1881,6 +1890,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
{"autovacuum_enabled", RELOPT_TYPE_BOOL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+ {"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+ offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index c3b3c9ea21a..cb42d4e572f 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
/*-------------------------------------------------------------------------
*
* vacuumparallel.c
- * Support routines for parallel vacuum execution.
+ * Support routines for parallel vacuum and autovacuum execution. In the
+ * comments below, the word "vacuum" will refer to both vacuum and
+ * autovacuum.
*
* This file contains routines that are intended to support setting up, using,
* and tearing down a ParallelVacuumState.
@@ -34,6 +36,7 @@
#include "executor/instrument.h"
#include "optimizer/paths.h"
#include "pgstat.h"
+#include "postmaster/autovacuum.h"
#include "storage/bufmgr.h"
#include "tcop/tcopprot.h"
#include "utils/lsyscache.h"
@@ -373,8 +376,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
shared->queryid = pgstat_get_my_query_id();
shared->maintenance_work_mem_worker =
(nindexes_mwm > 0) ?
- maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
- maintenance_work_mem;
+ vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+ vac_work_mem;
+
shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
/* Prepare DSA space for dead items */
@@ -553,12 +557,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
int nindexes_parallel_bulkdel = 0;
int nindexes_parallel_cleanup = 0;
int parallel_workers;
+ int max_workers;
+
+ max_workers = AmAutoVacuumWorkerProcess() ?
+ autovacuum_max_parallel_workers :
+ max_parallel_maintenance_workers;
/*
* We don't allow performing parallel operation in standalone backend or
* when parallelism is disabled.
*/
- if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+ if (!IsUnderPostmaster || max_workers == 0)
return 0;
/*
@@ -597,8 +606,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
parallel_workers = (nrequested > 0) ?
Min(nrequested, nindexes_parallel) : nindexes_parallel;
- /* Cap by max_parallel_maintenance_workers */
- parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+ /* Cap by GUC variable */
+ parallel_workers = Min(parallel_workers, max_workers);
return parallel_workers;
}
@@ -646,6 +655,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
*/
nworkers = Min(nworkers, pvs->pcxt->nworkers);
+ /*
+ * Reserve workers in autovacuum global state. Note that we may be given
+ * fewer workers than we requested.
+ */
+ if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+ AutoVacuumReserveParallelWorkers(&nworkers);
+
/*
* Set index vacuum status and mark whether parallel vacuum worker can
* process it.
@@ -690,6 +706,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
LaunchParallelWorkers(pvs->pcxt);
+ /*
+ * Tell autovacuum that we could not launch all the previously
+ * reserved workers.
+ */
+ if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched < nworkers)
+ AutoVacuumReleaseParallelWorkers(nworkers - pvs->pcxt->nworkers_launched);
+
if (pvs->pcxt->nworkers_launched > 0)
{
/*
@@ -738,6 +761,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+ /* Release all the reserved parallel workers for autovacuum */
+ if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched > 0)
+ AutoVacuumReleaseParallelWorkers(pvs->pcxt->nworkers_launched);
}
/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 22379de1e31..097b1dd55cf 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -151,6 +151,13 @@ int Log_autoanalyze_min_duration = 600000;
static double av_storage_param_cost_delay = -1;
static int av_storage_param_cost_limit = -1;
+/*
+ * Tracks the number of parallel workers currently reserved by the
+ * autovacuum worker. This is non-zero only for the parallel autovacuum
+ * leader process.
+ */
+static int av_nworkers_reserved = 0;
+
/* Flags set by signal handlers */
static volatile sig_atomic_t got_SIGUSR2 = false;
@@ -285,6 +292,8 @@ typedef struct AutoVacuumWorkItem
* av_workItems work item array
* av_nworkersForBalance the number of autovacuum workers to use when
* calculating the per worker cost limit
+ * av_freeParallelWorkers the number of free parallel autovacuum workers
+ * av_maxParallelWorkers the maximum number of parallel autovacuum workers
*
* This struct is protected by AutovacuumLock, except for av_signal and parts
* of the worker list (see above).
@@ -299,6 +308,8 @@ typedef struct
WorkerInfo av_startingWorker;
AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
pg_atomic_uint32 av_nworkersForBalance;
+ uint32 av_freeParallelWorkers;
+ uint32 av_maxParallelWorkers;
} AutoVacuumShmemStruct;
static AutoVacuumShmemStruct *AutoVacuumShmem;
@@ -361,6 +372,8 @@ static void autovac_report_workitem(AutoVacuumWorkItem *workitem,
static void avl_sigusr2_handler(SIGNAL_ARGS);
static bool av_worker_available(void);
static void check_av_worker_gucs(void);
+static void adjust_free_parallel_workers(int prev_max_parallel_workers);
+static void AutoVacuumReleaseAllParallelWorkers(void);
@@ -760,6 +773,8 @@ ProcessAutoVacLauncherInterrupts(void)
if (ConfigReloadPending)
{
int autovacuum_max_workers_prev = autovacuum_max_workers;
+ int autovacuum_max_parallel_workers_prev =
+ autovacuum_max_parallel_workers;
ConfigReloadPending = false;
ProcessConfigFile(PGC_SIGHUP);
@@ -776,6 +791,15 @@ ProcessAutoVacLauncherInterrupts(void)
if (autovacuum_max_workers_prev != autovacuum_max_workers)
check_av_worker_gucs();
+ /*
+ * If autovacuum_max_parallel_workers changed, we must take care to
+ * keep the number of available parallel autovacuum workers in shmem
+ * consistent with the new value.
+ */
+ if (autovacuum_max_parallel_workers_prev !=
+ autovacuum_max_parallel_workers)
+ adjust_free_parallel_workers(autovacuum_max_parallel_workers_prev);
+
/* rebuild the list in case the naptime changed */
rebuild_database_list(InvalidOid);
}
@@ -1380,6 +1404,16 @@ avl_sigusr2_handler(SIGNAL_ARGS)
* AUTOVACUUM WORKER CODE
********************************************************************/
+/*
+ * Make sure that all reserved workers are released, even if the parallel
+ * autovacuum leader is exiting due to a FATAL error.
+ */
+static void
+autovacuum_worker_before_shmem_exit(int code, Datum arg)
+{
+ AutoVacuumReleaseAllParallelWorkers();
+}
+
/*
* Main entry point for autovacuum worker processes.
*/
@@ -2277,6 +2311,12 @@ do_autovacuum(void)
"Autovacuum Portal",
ALLOCSET_DEFAULT_SIZES);
+ /*
+ * Parallel autovacuum can reserve parallel workers. Make sure that all
+ * reserved workers are released even after a FATAL error.
+ */
+ before_shmem_exit(autovacuum_worker_before_shmem_exit, 0);
+
/*
* Perform operations on collected tables.
*/
@@ -2458,6 +2498,12 @@ do_autovacuum(void)
}
PG_CATCH();
{
+ /*
+ * Parallel autovacuum can reserve parallel workers. Make sure
+ * that all reserved workers are released.
+ */
+ AutoVacuumReleaseAllParallelWorkers();
+
/*
* Abort the transaction, start a new one, and proceed with the
* next table in our list.
@@ -2858,8 +2904,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
*/
tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
- /* As of now, we don't support parallel vacuum for autovacuum */
- tab->at_params.nworkers = -1;
+
+ /* Decide whether we need to process indexes of the table in parallel. */
+ tab->at_params.nworkers = avopts
+ ? avopts->autovacuum_parallel_workers
+ : -1;
+
tab->at_params.freeze_min_age = freeze_min_age;
tab->at_params.freeze_table_age = freeze_table_age;
tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -3336,6 +3386,76 @@ AutoVacuumRequestWork(AutoVacuumWorkItemType type, Oid relationId,
return result;
}
+/*
+ * Reserves parallel workers for autovacuum.
+ *
+ * nworkers is an in/out parameter: on input, the number of parallel workers
+ * the caller requests; on output, the actual number of reserved workers.
+ */
+void
+AutoVacuumReserveParallelWorkers(int *nworkers)
+{
+ /* Only leader autovacuum worker can call this function. */
+ Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+ /* The worker must not have any reserved workers yet */
+ Assert(av_nworkers_reserved == 0);
+
+ LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+ /* Provide as many workers as we can. */
+ *nworkers = Min(AutoVacuumShmem->av_freeParallelWorkers, *nworkers);
+ AutoVacuumShmem->av_freeParallelWorkers -= *nworkers;
+
+ /* Remember how many workers we have reserved. */
+ av_nworkers_reserved = *nworkers;
+
+ LWLockRelease(AutovacuumLock);
+}
+
+/*
+ * The leader autovacuum process must call this function in order to update
+ * the global autovacuum state, so that other leaders will be able to use
+ * these parallel workers.
+ *
+ * 'nworkers' is the number of workers the caller wants to release.
+ */
+void
+AutoVacuumReleaseParallelWorkers(int nworkers)
+{
+ /* Only leader worker can call this function. */
+ Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+ LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+ /*
+ * If the maximum number of parallel workers was reduced during execution,
+ * we must cap the number of available workers at its new value.
+ */
+ AutoVacuumShmem->av_freeParallelWorkers =
+ Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
+ AutoVacuumShmem->av_maxParallelWorkers);
+
+ /* Don't have to remember these workers anymore. */
+ av_nworkers_reserved -= nworkers;
+
+ LWLockRelease(AutovacuumLock);
+}
+
+/*
+ * Same as above, but releases *all* parallel workers that were reserved by
+ * the current leader autovacuum process.
+ */
+static void
+AutoVacuumReleaseAllParallelWorkers(void)
+{
+ /* Only leader worker can call this function. */
+ Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+ if (av_nworkers_reserved > 0)
+ AutoVacuumReleaseParallelWorkers(av_nworkers_reserved);
+}
+
/*
* autovac_init
* This is called at postmaster initialization.
@@ -3396,6 +3516,10 @@ AutoVacuumShmemInit(void)
Assert(!found);
AutoVacuumShmem->av_launcherpid = 0;
+ AutoVacuumShmem->av_maxParallelWorkers =
+ Min(autovacuum_max_parallel_workers, max_worker_processes);
+ AutoVacuumShmem->av_freeParallelWorkers =
+ AutoVacuumShmem->av_maxParallelWorkers;
dclist_init(&AutoVacuumShmem->av_freeWorkers);
dlist_init(&AutoVacuumShmem->av_runningWorkers);
AutoVacuumShmem->av_startingWorker = NULL;
@@ -3477,3 +3601,34 @@ check_av_worker_gucs(void)
errdetail("The server will only start up to \"autovacuum_worker_slots\" (%d) autovacuum workers at a given time.",
autovacuum_worker_slots)));
}
+
+/*
+ * Adjusts the number of free parallel workers to correspond to the new
+ * autovacuum_max_parallel_workers value.
+ */
+static void
+adjust_free_parallel_workers(int prev_max_parallel_workers)
+{
+ LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+ /*
+ * Cap the number of free workers at the new parameter's value, if needed.
+ */
+ AutoVacuumShmem->av_freeParallelWorkers =
+ Min(AutoVacuumShmem->av_freeParallelWorkers,
+ autovacuum_max_parallel_workers);
+
+ if (autovacuum_max_parallel_workers > prev_max_parallel_workers)
+ {
+ /*
+ * If the user wants to increase the number of parallel autovacuum
+ * workers, we must increase the number of free workers.
+ */
+ AutoVacuumShmem->av_freeParallelWorkers +=
+ (autovacuum_max_parallel_workers - prev_max_parallel_workers);
+ }
+
+ AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
+
+ LWLockRelease(AutovacuumLock);
+}
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int NBuffers = 16384;
int MaxConnections = 100;
int max_worker_processes = 8;
int max_parallel_workers = 8;
+int autovacuum_max_parallel_workers = 2;
int MaxBackends = 0;
/* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index ae9d5f3fb70..c8a99a67767 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3326,9 +3326,13 @@ set_config_with_handle(const char *name, config_handle *handle,
*
* Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
*
- * Other changes might need to affect other workers, so forbid them.
+ * Other changes might need to affect other workers, so forbid them. Note
+ * that the parallel autovacuum leader is an exception, because only
+ * cost-based delay parameters need to be propagated to parallel vacuum
+ * workers, and we handle that elsewhere if appropriate.
*/
- if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+ if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+ action != GUC_ACTION_SAVE &&
(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
{
ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 7c60b125564..e933f5048f7 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,15 @@
max => '2000000000',
},
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+ short_desc => 'Maximum number of parallel autovacuum workers that can be taken from the bgworkers pool.',
+ long_desc => 'This parameter is capped by "max_worker_processes" (not by "autovacuum_max_workers"!).',
+ variable => 'autovacuum_max_parallel_workers',
+ boot_val => '2',
+ min => '0',
+ max => 'MAX_BACKENDS',
+},
+
{ name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index dc9e2255f8a..86c67b790b0 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -691,6 +691,8 @@
#autovacuum_worker_slots = 16 # autovacuum worker slots to allocate
# (change requires restart)
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2 # limited by
+ # max_worker_processes
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b91bc00062..ed59a21289c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1423,6 +1423,7 @@ static const char *const table_storage_parameters[] = {
"autovacuum_multixact_freeze_max_age",
"autovacuum_multixact_freeze_min_age",
"autovacuum_multixact_freeze_table_age",
+ "autovacuum_parallel_workers",
"autovacuum_vacuum_cost_delay",
"autovacuum_vacuum_cost_limit",
"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index db559b39c4d..ad6e19f426c 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
extern PGDLLIMPORT int MaxConnections;
extern PGDLLIMPORT int max_worker_processes;
extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
extern PGDLLIMPORT int commit_timestamp_buffers;
extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 5aa0f3a8ac1..3f5b59a15bd 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -62,6 +62,10 @@ pg_noreturn extern void AutoVacWorkerMain(const void *startup_data, size_t start
extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
Oid relationId, BlockNumber blkno);
+/* parallel autovacuum stuff */
+extern void AutoVacuumReserveParallelWorkers(int *nworkers);
+extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+
/* shared memory stuff */
extern Size AutoVacuumShmemSize(void);
extern void AutoVacuumShmemInit(void);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index d03ab247788..c1d882659f9 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,13 @@ typedef struct ForeignKeyCacheInfo
typedef struct AutoVacOpts
{
bool enabled;
+
+ /*
+ * Max number of parallel autovacuum workers. If the value is 0, the
+ * parallel degree is computed based on the number of indexes.
+ */
+ int autovacuum_parallel_workers;
+
int vacuum_threshold;
int vacuum_max_threshold;
int vacuum_ins_threshold;
--
2.43.0
From 895131049b2416566781ff585a577bc0342f5f71 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:07:47 +0700
Subject: [PATCH v19 2/5] Logging for parallel autovacuum
---
src/backend/access/heap/vacuumlazy.c | 27 +++++++++++++++++++++++++--
src/backend/commands/vacuumparallel.c | 21 +++++++++++++++------
src/include/commands/vacuum.h | 16 ++++++++++++++--
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 55 insertions(+), 10 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 1fcb212ab3d..0be33cb84a6 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -347,6 +347,12 @@ typedef struct LVRelState
int num_index_scans;
int num_dead_items_resets;
Size total_dead_items_bytes;
+
+ /*
+ * Total number of planned and actually launched parallel workers for
+ * index scans.
+ */
+ PVWorkersUsage workers_usage;
/* Counters that follow are only for scanned_pages */
int64 tuples_deleted; /* # deleted from table */
int64 tuples_frozen; /* # newly frozen */
@@ -630,6 +636,7 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
LVRelState *vacrel;
bool verbose,
instrument,
+ log_workers_usage = false, /* for parallel [auto]vacuum only */
skipwithvm,
frozenxid_updated,
minmulti_updated;
@@ -709,6 +716,12 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
indnames = palloc_array(char *, vacrel->nindexes);
for (int i = 0; i < vacrel->nindexes; i++)
indnames[i] = pstrdup(RelationGetRelationName(vacrel->indrels[i]));
+
+ /*
+ * Worker usage statistics must be accumulated for parallel autovacuum
+ * and for VACUUM (PARALLEL, VERBOSE).
+ */
+ log_workers_usage = (params.nworkers > -1);
}
/*
@@ -781,6 +794,9 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
vacrel->vm_new_visible_frozen_pages = 0;
vacrel->vm_new_frozen_pages = 0;
+ vacrel->workers_usage.nlaunched = 0;
+ vacrel->workers_usage.nplanned = 0;
+
/*
* Get cutoffs that determine which deleted tuples are considered DEAD,
* not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze. Then determine
@@ -1123,6 +1139,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
orig_rel_pages == 0 ? 100.0 :
100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
vacrel->lpdead_items);
+ if (log_workers_usage)
+ appendStringInfo(&buf,
+ _("parallel index vacuum/cleanup: %d workers were planned and %d workers were launched in total\n"),
+ vacrel->workers_usage.nplanned,
+ vacrel->workers_usage.nlaunched);
for (int i = 0; i < vacrel->nindexes; i++)
{
IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2698,7 +2719,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
{
/* Outsource everything to parallel variant */
parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
- vacrel->num_index_scans);
+ vacrel->num_index_scans,
+ &vacrel->workers_usage);
/*
* Do a postcheck to consider applying wraparound failsafe now. Note
@@ -3131,7 +3153,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
/* Outsource everything to parallel variant */
parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
vacrel->num_index_scans,
- estimated_count);
+ estimated_count,
+ &vacrel->workers_usage);
}
/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index cb42d4e572f..c32314f9731 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -227,7 +227,7 @@ struct ParallelVacuumState
static int parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
bool *will_parallel_vacuum);
static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
- bool vacuum);
+ bool vacuum, PVWorkersUsage *wusage);
static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -502,7 +502,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
*/
void
parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
- int num_index_scans)
+ int num_index_scans, PVWorkersUsage *wusage)
{
Assert(!IsParallelWorker());
@@ -513,7 +513,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
pvs->shared->reltuples = num_table_tuples;
pvs->shared->estimated_count = true;
- parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+ parallel_vacuum_process_all_indexes(pvs, num_index_scans, true, wusage);
}
/*
@@ -521,7 +521,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
*/
void
parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
- int num_index_scans, bool estimated_count)
+ int num_index_scans, bool estimated_count,
+ PVWorkersUsage *wusage)
{
Assert(!IsParallelWorker());
@@ -533,7 +534,7 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
pvs->shared->reltuples = num_table_tuples;
pvs->shared->estimated_count = estimated_count;
- parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+ parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wusage);
}
/*
@@ -618,7 +619,7 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
*/
static void
parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
- bool vacuum)
+ bool vacuum, PVWorkersUsage *wusage)
{
int nworkers;
PVIndVacStatus new_status;
@@ -655,6 +656,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
*/
nworkers = Min(nworkers, pvs->pcxt->nworkers);
+ /* Remember this value, if we were asked to */
+ if (wusage != NULL && nworkers > 0)
+ wusage->nplanned += nworkers;
+
/*
* Reserve workers in autovacuum global state. Note that we may be given
* fewer workers than we requested.
@@ -725,6 +730,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
/* Enable shared cost balance for leader backend */
VacuumSharedCostBalance = &(pvs->shared->cost_balance);
VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+ /* Remember this value, if we were asked to */
+ if (wusage != NULL)
+ wusage->nlaunched += pvs->pcxt->nworkers_launched;
}
if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..ec5d70aacdc 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,16 @@ typedef struct VacDeadItemsInfo
int64 num_items; /* current # of entries */
} VacDeadItemsInfo;
+/*
+ * PVWorkersUsage stores the total number of planned and launched parallel
+ * workers during parallel vacuum.
+ */
+typedef struct PVWorkersUsage
+{
+ int nlaunched;
+ int nplanned;
+} PVWorkersUsage;
+
/* GUC parameters */
extern PGDLLIMPORT int default_statistics_target; /* PGDLLIMPORT for PostGIS */
extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +404,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
long num_table_tuples,
- int num_index_scans);
+ int num_index_scans,
+ PVWorkersUsage *wusage);
extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
long num_table_tuples,
int num_index_scans,
- bool estimated_count);
+ bool estimated_count,
+ PVWorkersUsage *wusage);
extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
/* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3f3a888fd0e..afebde72235 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2404,6 +2404,7 @@ PullFilterOps
PushFilter
PushFilterOps
PushFunction
+PVWorkersUsage
PyCFunction
PyMethodDef
PyModuleDef
--
2.43.0
From cf06a408afe33373a802fb16507c91850244e638 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:08:14 +0700
Subject: [PATCH v19 4/5] Tests for parallel autovacuum
---
src/backend/commands/vacuumparallel.c | 29 ++
src/backend/postmaster/autovacuum.c | 17 +
src/include/postmaster/autovacuum.h | 1 +
src/test/modules/Makefile | 1 +
src/test/modules/meson.build | 1 +
src/test/modules/test_autovacuum/.gitignore | 2 +
src/test/modules/test_autovacuum/Makefile | 28 ++
src/test/modules/test_autovacuum/meson.build | 36 ++
.../modules/test_autovacuum/t/001_basic.pl | 325 ++++++++++++++++++
.../test_autovacuum/test_autovacuum--1.0.sql | 20 ++
.../modules/test_autovacuum/test_autovacuum.c | 166 +++++++++
.../test_autovacuum/test_autovacuum.control | 3 +
12 files changed, 629 insertions(+)
create mode 100644 src/test/modules/test_autovacuum/.gitignore
create mode 100644 src/test/modules/test_autovacuum/Makefile
create mode 100644 src/test/modules/test_autovacuum/meson.build
create mode 100644 src/test/modules/test_autovacuum/t/001_basic.pl
create mode 100644 src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.c
create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.control
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 71449630b63..1ead6e1193b 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -39,6 +39,7 @@
#include "postmaster/autovacuum.h"
#include "storage/bufmgr.h"
#include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
@@ -846,6 +847,14 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
pvs->pcxt->nworkers_launched, nworkers)));
}
+ /*
+ * To be able to check that all reserved parallel workers are released
+ * even on failure, allow injection points to trigger an error at this
+ * point.
+ */
+ if (nworkers > 0)
+ INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+
/* Vacuum the indexes that can be processed by only leader process */
parallel_vacuum_process_unsafe_indexes(pvs);
@@ -855,6 +864,15 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
*/
parallel_vacuum_process_safe_indexes(pvs);
+ /*
+ * To be able to check that the leader parallel autovacuum worker can
+ * propagate cost-based params to parallel workers, wait here until the
+ * configuration is changed. I.e. the tests expect that vacuum_delay_point
+ * has been called during index processing (if the config was changed).
+ */
+ if (nworkers > 0)
+ INJECTION_POINT("autovacuum-leader-after-indexes-processing", NULL);
+
/*
* Next, accumulate buffer and WAL usage. (This must wait for the workers
* to finish, or we might get incomplete data.)
@@ -1220,9 +1238,20 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
/* Prepare to track buffer usage during parallel execution */
InstrStartParallelQuery();
+ INJECTION_POINT("parallel-worker-before-indexes-processing", NULL);
+
/* Process indexes to perform vacuum/cleanup */
parallel_vacuum_process_safe_indexes(&pvs);
+ /*
+ * There is no guarantee that each parallel worker will necessarily
+ * process at least one index. Thus, at this point we cannot be sure that
+ * the worker has called vacuum_delay_point. In order to test cost-based
+ * parameter propagation (from the leader worker), call vacuum_delay_point
+ * here if the injection point is active.
+ */
+ INJECTION_POINT("parallel-autovacuum-force-delay-point", NULL);
+
/* Report buffer/WAL usage during parallel execution */
buffer_usage = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_BUFFER_USAGE, false);
wal_usage = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_WAL_USAGE, false);
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 98965fd8e2d..db99241df3e 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -3456,6 +3456,23 @@ AutoVacuumReleaseAllParallelWorkers(void)
AutoVacuumReleaseParallelWorkers(av_nworkers_reserved);
}
+/*
+ * Get the number of free autovacuum parallel workers.
+ *
+ * For testing purposes only!
+ */
+uint32
+AutoVacuumGetFreeParallelWorkers(void)
+{
+ uint32 nfree_workers;
+
+ LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+ nfree_workers = AutoVacuumShmem->av_freeParallelWorkers;
+ LWLockRelease(AutovacuumLock);
+
+ return nfree_workers;
+}
+
/*
* autovac_init
* This is called at postmaster initialization.
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 3f5b59a15bd..f50c7462cd4 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -65,6 +65,7 @@ extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
/* parallel autovacuum stuff */
extern void AutoVacuumReserveParallelWorkers(int *nworkers);
extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+extern uint32 AutoVacuumGetFreeParallelWorkers(void);
/* shared memory stuff */
extern Size AutoVacuumShmemSize(void);
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 4c6d56d97d8..bfe365fa575 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
plsample \
spgist_name_ops \
test_aio \
+ test_autovacuum \
test_binaryheap \
test_bitmapset \
test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 1b31c5b98d6..01a3e3ec044 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
subdir('spgist_name_ops')
subdir('ssl_passphrase_callback')
subdir('test_aio')
+subdir('test_autovacuum')
subdir('test_binaryheap')
subdir('test_bitmapset')
subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..32254c53a5d
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,28 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+MODULE_big = test_autovacuum
+OBJS = \
+ $(WIN32RES) \
+ test_autovacuum.o
+
+EXTENSION = test_autovacuum
+DATA = test_autovacuum--1.0.sql
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..3441e5e49cf
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,36 @@
+# Copyright (c) 2024-2025, PostgreSQL Global Development Group
+
+test_autovacuum_sources = files(
+ 'test_autovacuum.c',
+)
+
+if host_system == 'windows'
+ test_autovacuum_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+ '--NAME', 'test_autovacuum',
+ '--FILEDESC', 'test_autovacuum - test code for parallel autovacuum',])
+endif
+
+test_autovacuum = shared_module('test_autovacuum',
+ test_autovacuum_sources,
+ kwargs: pg_test_mod_args,
+)
+test_install_libs += test_autovacuum
+
+test_install_data += files(
+ 'test_autovacuum.control',
+ 'test_autovacuum--1.0.sql',
+)
+
+tests += {
+ 'name': 'test_autovacuum',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'tap': {
+ 'env': {
+ 'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+ },
+ 'tests': [
+ 't/001_basic.pl',
+ ],
+ },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_basic.pl b/src/test/modules/test_autovacuum/t/001_basic.pl
new file mode 100644
index 00000000000..369d2905a2b
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_basic.pl
@@ -0,0 +1,325 @@
+use strict;
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+ plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test, disable autovacuum for the 'test_autovac' table and
+# generate some dead tuples in it.
+
+sub prepare_for_next_test
+{
+ my ($node, $test_number) = @_;
+
+ $node->safe_psql('postgres', qq{
+ ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+ });
+
+ $node->safe_psql('postgres', qq{
+ UPDATE test_autovac SET col_1 = $test_number;
+ ANALYZE test_autovac;
+ });
+}
+
+sub wait_for_av_log
+{
+ my ($node, $expected_log) = @_;
+
+ $node->wait_for_log($expected_log);
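+	# Truncate the log so that subsequent waits do not match entries left
+	# over from earlier tests.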
+ truncate $node->logfile, 0 or die "truncate failed: $!";
+}
+
+my $psql_out;
+
+my $node = PostgreSQL::Test::Cluster->new('node1');
+$node->init;
+
+# Configure postgres so that it can launch parallel autovacuum workers, logs
+# all the information we are interested in, and runs autovacuum frequently
+$node->append_conf('postgresql.conf', qq{
+ max_worker_processes = 20
+ max_parallel_workers = 20
+ max_parallel_maintenance_workers = 20
+ autovacuum_max_parallel_workers = 20
+ log_min_messages = debug2
+ log_autovacuum_min_duration = 0
+ autovacuum_naptime = '1s'
+ min_parallel_index_scan_size = 0
+	shared_preload_libraries = 'test_autovacuum'
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+ plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql('postgres', qq{
+ CREATE EXTENSION test_autovacuum;
+ CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 4;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
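+# NB: parallel vacuum can use at most one worker per parallelizable index, so
+# the requested worker count must stay below the number of indexes.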
+
+# Create a table with the specified number of B-tree indexes on it
+$node->safe_psql('postgres', qq{
+ CREATE TABLE test_autovac (
+ id SERIAL PRIMARY KEY,
+ col_1 INTEGER, col_2 INTEGER, col_3 INTEGER, col_4 INTEGER
+ ) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+
+ DO \$\$
+ DECLARE
+ i INTEGER;
+ BEGIN
+ FOR i IN 1..$indexes_num LOOP
+ EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+ END LOOP;
+ END \$\$;
+});
+
+# Insert the specified number of tuples into the table
+$node->safe_psql('postgres', qq{
+ DO \$\$
+ DECLARE
+ i INTEGER;
+ BEGIN
+ FOR i IN 1..$initial_rows_num LOOP
+ INSERT INTO test_autovac VALUES (i, i + 1, i + 2, i + 3);
+ END LOOP;
+ END \$\$;
+});
+
+# Test 1:
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode. Just check that it can.
+# Also check that all requested workers are:
+# 1) launched
+# 2) correctly released
+
+prepare_for_next_test($node, 1);
+
+$node->safe_psql('postgres', qq{
+ ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table has completed. At the same
+# time, check that the required number of parallel workers has been launched.
+wait_for_av_log($node,
+ qr/parallel index vacuum\/cleanup: 2 workers were planned / .
+ qr/and 2 workers were launched in total/);
+
+$node->psql('postgres',
+ "SELECT get_parallel_autovacuum_free_workers();",
+ stdout => \$psql_out,
+);
+is($psql_out, 20, 'All parallel workers have been released by the leader');
+
+# Test 2:
+# Check that the parallel autovacuum leader propagates cost-based parameters
+# to its parallel workers.
+
+prepare_for_next_test($node, 2);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+ SELECT injection_points_attach('autovacuum-leader-after-indexes-processing', 'wait');
+ SELECT injection_points_attach('parallel-worker-before-indexes-processing', 'wait');
+ SELECT inj_force_delay_point_attach();
+
+ ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum leader launches its parallel worker and
+# falls asleep on the injection point
+$node->wait_for_event(
+ 'autovacuum worker',
+ 'autovacuum-leader-before-indexes-processing'
+);
+
+# Reload the config: the leader must update its own cost-based parameters
+# while processing indexes
+$node->safe_psql('postgres', qq{
+ ALTER SYSTEM SET vacuum_cost_limit = 500;
+ ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+ ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+ ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+ SELECT pg_reload_conf();
+});
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+# Wait until the leader is guaranteed to have updated the parameters and
+# propagated their values to the parallel worker
+$node->wait_for_event(
+ 'autovacuum worker',
+ 'autovacuum-leader-after-indexes-processing'
+);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_wakeup('autovacuum-leader-after-indexes-processing');
+});
+
+# Now wake up the parallel worker and force it to call vacuum_delay_point
+$node->wait_for_event(
+ 'parallel worker',
+ 'parallel-worker-before-indexes-processing'
+);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_wakeup('parallel-worker-before-indexes-processing');
+});
+
+# Check that the worker successfully updated all parameters
+wait_for_av_log($node,
+ qr/Vacuum cost-based delay parameters of parallel worker:\n/ .
+ qr/\tvacuum_cost_limit = 500\n/ .
+ qr/\tvacuum_cost_delay = 2\n/ .
+ qr/\tvacuum_cost_page_miss = 10\n/ .
+ qr/\tvacuum_cost_page_dirty = 10\n/ .
+ qr/\tvacuum_cost_page_hit = 10\n/);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+ SELECT injection_points_detach('autovacuum-leader-after-indexes-processing');
+ SELECT injection_points_detach('parallel-worker-before-indexes-processing');
+ SELECT inj_force_delay_point_detach();
+
+ ALTER TABLE test_autovac SET (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+});
+
+
+# Test 3:
+# Check that the number of free parallel workers is adjusted when the
+# autovacuum_max_parallel_workers parameter changes
+
+prepare_for_next_test($node, 3);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+ ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$node->wait_for_event(
+ 'autovacuum worker',
+ 'autovacuum-leader-before-indexes-processing'
+);
+
+$node->safe_psql('postgres', qq{
+ ALTER SYSTEM SET autovacuum_max_parallel_workers = 10;
+ SELECT pg_reload_conf();
+});
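+
+# The GUC is lowered while two workers are still reserved; when the leader
+# releases them, the free-worker counter must be capped at the new maximum.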
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+# Wait until the end of parallel processing
+wait_for_av_log($node,
+ qr/parallel index vacuum\/cleanup: 2 workers were planned / .
+ qr/and 2 workers were launched in total/);
+
+# Once all parallel workers have been released, the number of free parallel
+# workers must not exceed the autovacuum_max_parallel_workers limit
+$node->psql('postgres',
+ "SELECT get_parallel_autovacuum_free_workers();",
+ stdout => \$psql_out,
+);
+is($psql_out, 10,
+ 'Number of free parallel workers is consistent');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+# Test 4:
+# We want parallel autovacuum workers to be released even if the leader hits
+# an error. First, simulate the situation where the leader exits due to ERROR.
+
+prepare_for_next_test($node, 4);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'error');
+ ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+wait_for_av_log($node,
+ qr/error triggered for injection point / .
+ qr/autovacuum-leader-before-indexes-processing/);
+
+$node->psql('postgres',
+ "SELECT get_parallel_autovacuum_free_workers();",
+ stdout => \$psql_out,
+);
+is($psql_out, 10,
+	'All parallel workers have been released by the leader after ERROR');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+# Test 5:
+# Same as the test above, but simulate the situation where the leader exits
+# due to FATAL.
+
+prepare_for_next_test($node, 5);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+ ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$node->wait_for_event(
+ 'autovacuum worker',
+ 'autovacuum-leader-before-indexes-processing'
+);
+
+my $av_pid = $node->safe_psql('postgres', qq{
+ SELECT pid FROM pg_stat_activity
+ WHERE backend_type = 'autovacuum worker'
+ AND wait_event = 'autovacuum-leader-before-indexes-processing'
+ LIMIT 1;
+});
+
+# Create a role with pg_signal_autovacuum_worker, which allows terminating
+# the autovacuum worker without superuser privileges.
+$node->safe_psql('postgres', qq{
+	CREATE ROLE regress_worker_role;
+	GRANT pg_signal_autovacuum_worker TO regress_worker_role;
+});
+
+# SET ROLE must happen in the same session as pg_terminate_backend(), since
+# each safe_psql call opens a fresh connection.
+$node->safe_psql('postgres', qq{
+	SET ROLE regress_worker_role;
+	SELECT pg_terminate_backend('$av_pid');
+});
+
+wait_for_av_log($node,
+ qr/terminating autovacuum process due to administrator command/);
+
+$node->psql('postgres',
+ "SELECT get_parallel_autovacuum_free_workers();",
+ stdout => \$psql_out,
+);
+is($psql_out, 10,
+	'All parallel workers have been released by the leader after FATAL');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
diff --git a/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
new file mode 100644
index 00000000000..679375fc82f
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
@@ -0,0 +1,20 @@
+/* src/test/modules/test_autovacuum/test_autovacuum--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION test_autovacuum" to load this file. \quit
+
+/*
+ * Functions for inspecting shared autovacuum state
+ */
+
+CREATE FUNCTION get_parallel_autovacuum_free_workers()
+RETURNS INTEGER STRICT
+AS 'MODULE_PATHNAME' LANGUAGE C;
+
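+-- Enable/disable the force-delay injection point callback in this module's
+-- shared state.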
+CREATE FUNCTION inj_force_delay_point_attach()
+RETURNS VOID STRICT
+AS 'MODULE_PATHNAME' LANGUAGE C;
+
+CREATE FUNCTION inj_force_delay_point_detach()
+RETURNS VOID STRICT
+AS 'MODULE_PATHNAME' LANGUAGE C;
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.c b/src/test/modules/test_autovacuum/test_autovacuum.c
new file mode 100644
index 00000000000..45050924f17
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.c
@@ -0,0 +1,166 @@
+/*-------------------------------------------------------------------------
+ *
+ * test_autovacuum.c
+ * Helpers to write tests for parallel autovacuum
+ *
+ * Copyright (c) 2020-2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * src/test/modules/test_autovacuum/test_autovacuum.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "commands/vacuum.h"
+#include "fmgr.h"
+#include "miscadmin.h"
+#include "postmaster/autovacuum.h"
+#include "storage/shmem.h"
+#include "storage/ipc.h"
+#include "storage/lwlock.h"
+#include "utils/builtins.h"
+#include "utils/injection_point.h"
+
+PG_MODULE_MAGIC;
+
+typedef struct InjPointState
+{
+ bool enabled_force_delay_point;
+} InjPointState;
+
+static InjPointState *inj_point_state;
+
+/* Shared memory init callbacks */
+static shmem_request_hook_type prev_shmem_request_hook = NULL;
+static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
+
+static void
+test_autovacuum_shmem_request(void)
+{
+ if (prev_shmem_request_hook)
+ prev_shmem_request_hook();
+
+ RequestAddinShmemSpace(sizeof(InjPointState));
+}
+
+static void
+test_autovacuum_shmem_startup(void)
+{
+ bool found;
+
+ if (prev_shmem_startup_hook)
+ prev_shmem_startup_hook();
+
+ /* Create or attach to the shared memory state */
+ LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
+
+ inj_point_state = ShmemInitStruct("injection_points",
+ sizeof(InjPointState),
+ &found);
+
+ if (!found)
+ {
+ /* First time through, initialize */
+ inj_point_state->enabled_force_delay_point = false;
+
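+		/*
+		 * Attach the callback once at startup; it remains attached for the
+		 * cluster's lifetime and is a no-op until explicitly enabled.
+		 */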
+ InjectionPointAttach("parallel-autovacuum-force-delay-point",
+ "test_autovacuum",
+ "inj_force_delay_point",
+ NULL,
+ 0);
+ }
+
+ LWLockRelease(AddinShmemInitLock);
+}
+
+void
+_PG_init(void)
+{
+ if (!process_shared_preload_libraries_in_progress)
+ return;
+
+ prev_shmem_request_hook = shmem_request_hook;
+ shmem_request_hook = test_autovacuum_shmem_request;
+ prev_shmem_startup_hook = shmem_startup_hook;
+ shmem_startup_hook = test_autovacuum_shmem_startup;
+}
+
+extern PGDLLEXPORT void inj_force_delay_point(const char *name,
+ const void *private_data,
+ void *arg);
+
+
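+/*
+ * SQL-callable wrapper around AutoVacuumGetFreeParallelWorkers(), letting
+ * the TAP test observe the shared counter of free parallel workers.
+ */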
+PG_FUNCTION_INFO_V1(get_parallel_autovacuum_free_workers);
+Datum
+get_parallel_autovacuum_free_workers(PG_FUNCTION_ARGS)
+{
+ uint32 nfree_workers;
+
+#ifndef USE_INJECTION_POINTS
+ ereport(ERROR, errmsg("injection points not supported"));
+#endif
+
+ nfree_workers = AutoVacuumGetFreeParallelWorkers();
+
+ PG_RETURN_UINT32(nfree_workers);
+}
+
+/*
+ * Injection point callback, attached at cluster startup.  When enabled, it
+ * makes a parallel vacuum worker go through vacuum_delay_point() and report
+ * its current cost-based delay parameters to the log.
+ */
+void
+inj_force_delay_point(const char *name, const void *private_data, void *arg)
+{
+ ereport(LOG,
+ errmsg("force delay point injection point called"),
+ errhidestmt(true), errhidecontext(true));
+
+ if (inj_point_state->enabled_force_delay_point)
+ {
+ StringInfoData buf;
+
+ Assert(IsParallelWorker() && !AmAutoVacuumWorkerProcess());
+
+		/*
+		 * Register as an active vacuum worker and go through
+		 * vacuum_delay_point(), so that any pending cost-based parameter
+		 * changes are applied before being reported below.
+		 */
+ pg_atomic_add_fetch_u32(VacuumActiveNWorkers, 1);
+ vacuum_delay_point(false);
+ pg_atomic_sub_fetch_u32(VacuumActiveNWorkers, 1);
+
+ initStringInfo(&buf);
+
+ appendStringInfo(&buf, "Vacuum cost-based delay parameters of parallel worker:\n");
+ appendStringInfo(&buf, "vacuum_cost_limit = %d\n", vacuum_cost_limit);
+ appendStringInfo(&buf, "vacuum_cost_delay = %g\n", vacuum_cost_delay);
+ appendStringInfo(&buf, "vacuum_cost_page_miss = %d\n", VacuumCostPageMiss);
+ appendStringInfo(&buf, "vacuum_cost_page_dirty = %d\n", VacuumCostPageDirty);
+ appendStringInfo(&buf, "vacuum_cost_page_hit = %d\n", VacuumCostPageHit);
+
+ ereport(LOG, errmsg("%s", buf.data));
+ pfree(buf.data);
+ }
+}
+
+PG_FUNCTION_INFO_V1(inj_force_delay_point_attach);
+Datum
+inj_force_delay_point_attach(PG_FUNCTION_ARGS)
+{
+#ifdef USE_INJECTION_POINTS
+ inj_point_state->enabled_force_delay_point = true;
+#else
+ ereport(ERROR, errmsg("injection points not supported"));
+#endif
+ PG_RETURN_VOID();
+}
+
+PG_FUNCTION_INFO_V1(inj_force_delay_point_detach);
+Datum
+inj_force_delay_point_detach(PG_FUNCTION_ARGS)
+{
+#ifdef USE_INJECTION_POINTS
+ inj_point_state->enabled_force_delay_point = false;
+#else
+ ereport(ERROR, errmsg("injection points not supported"));
+#endif
+ PG_RETURN_VOID();
+}
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.control b/src/test/modules/test_autovacuum/test_autovacuum.control
new file mode 100644
index 00000000000..1b7fad258f0
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.control
@@ -0,0 +1,3 @@
+comment = 'Test code for parallel autovacuum'
+default_version = '1.0'
+module_pathname = '$libdir/test_autovacuum'
--
2.43.0