If there are no remaining concerns, I'd like to move forward with
committing v9 in September's commitfest.
--
nathan
rebased
--
nathan
From 61513f744012c2b9b59085ce8c4a960da9e56ee7 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Sat, 22 Jun 2024 15:05:44 -0500
Subject: [PATCH v9 1/1] allow changing autovacuum_max_workers without
restarting
---
doc/src/sgml/config.sgml |
values of other GUCs. Here is a new version
of the patch that adds the WARNING described above.
--
nathan
From e59c8199858b0331c2d9ec7a40d26f0e89657bf4 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Sat, 22 Jun 2024 15:05:44 -0500
Subject: [PATCH v8 1/1] allow changing autovacuum_max_
failing instead of
silently proceeding with a different value than the user configured. Any
thoughts?
--
nathan
From bd486d1ab302c4654b9cfbc57230bcf9b140711e Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Sat, 22 Jun 2024 15:05:44 -0500
Subject: [PATCH v7 1/1] allow changing autovacuum_max
.
--
nathan
From c1c33c6c157a7cec81180714369b2978b09e402f Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Sat, 22 Jun 2024 15:05:44 -0500
Subject: [PATCH v6 1/1] allow changing autovacuum_max_workers without
restarting
---
doc/src/sgml/config.sgml | 28 +++-
doc
extern bool check_max_stack_depth(int *newval, void **extra, GucSource source);
extern void assign_max_stack_depth(int newval, void *extra);
extern bool check_multixact_member_buffers(int *newval, void **extra,
--
2.39.3 (Apple Git-146)
From 48f7d56b99f2d98533c8767d5891a78c8411872d Mon Sep 17 00:00:00 2001
From: Nathan
On Tue, Jun 18, 2024 at 02:33:31PM -0700, Andres Freund wrote:
> Another one:
>
> Have a general cap of 64, but additionally limit it to something like
> max(1, min(WORKER_CAP, max_connections / 4))
>
> so that cases like tap tests don't end up allocating vastly more worker slots
> than actu
Hi,
On 2024-06-18 16:09:09 -0500, Nathan Bossart wrote:
> On Tue, Jun 18, 2024 at 01:43:34PM -0700, Andres Freund wrote:
> > I just don't see much point in reserving 256 worker "possibilities", tbh. I
> > can't think of any practical system where it makes sense to use this much
> > (nor
> > do I
On Tue, Jun 18, 2024 at 01:43:34PM -0700, Andres Freund wrote:
> I just don't see much point in reserving 256 worker "possibilities", tbh. I
> can't think of any practical system where it makes sense to use this much (nor
> do I think it's going to be reasonable in the next 10 years) and it's just
Hi,
On 2024-06-18 14:00:00 -0500, Nathan Bossart wrote:
> On Mon, Jun 03, 2024 at 04:24:27PM -0700, Andres Freund wrote:
> > On 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:
> >> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:
> >> > Why do we think that increasing the number of
On Mon, Jun 03, 2024 at 04:24:27PM -0700, Andres Freund wrote:
> On 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:
>> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:
>> > Why do we think that increasing the number of PGPROC slots, heavyweight
>> > locks
>> > etc by 256 isn't going
Hi,
On 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:
> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:
> > Why do we think that increasing the number of PGPROC slots, heavyweight
> > locks
> > etc by 256 isn't going to cause issues? That's not an insubstantial amount
> > of
>
On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:
> I don't have time to read through the entire thread right now - it'd be good
> for the commit message of a patch like this to include justification for why
> it's ok to make such a change. Even before actually committing it, so
> revi
Hi,
On 2024-06-03 13:52:29 -0500, Nathan Bossart wrote:
> Here is an updated patch that uses 256 as the upper limit.
I don't have time to read through the entire thread right now - it'd be good
for the commit message of a patch like this to include justification for why
it's ok to make such a cha
ar.
I plan to further improve this section of the documentation in v18, so I've
left the constant unexplained for now.
--
nathan
From 056ad035c5d213f7ae49f5feb28229f35086430f Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Tue, 7 May 2024 10:59:24 -0500
Subject: [PATCH v4 1/1] allow changin
On Thu, May 16, 2024 at 04:37:10PM +, Imseih (AWS), Sami wrote:
> I thought 256 was a good enough limit. In practice, I doubt anyone will
> benefit from more than a few dozen autovacuum workers.
> I think 1024 is way too high to even allow.
WFM
> I don't think combining 1024 + 5 = 1029 is a
>>> That's true, but using a hard-coded limit means we no longer need to add a
>>> new GUC. Always allocating, say, 256 slots might require a few additional
>>> kilobytes of shared memory, most of which will go unused, but that seems
>>> unlikely to be a problem for the systems that will run Postgr
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
From 72e0496294ef0390c77cef8031ae51c1a44ebde8 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Tue, 7 May 2024 10:59:24 -0500
Subject: [PATCH v3 1/1] allow changing autovacuum_max_workers without
restarting
---
doc/src/sgml/config.sgml
> That's true, but using a hard-coded limit means we no longer need to add a
> new GUC. Always allocating, say, 256 slots might require a few additional
> kilobytes of shared memory, most of which will go unused, but that seems
> unlikely to be a problem for the systems that will run Postgres v18.
On Mon, Apr 15, 2024 at 05:41:04PM +, Imseih (AWS), Sami wrote:
>> Another option could be to just remove the restart-only GUC and hard-code
>> the upper limit of autovacuum_max_workers to 64 or 128 or something. While
>> that would simplify matters, I suspect it would be hard to choose an
>> a
On Fri, Apr 19, 2024 at 4:29 PM Nathan Bossart wrote:
> I certainly don't want to hold up $SUBJECT for a larger rewrite of
> autovacuum scheduling, but I also don't want to shy away from a larger
> rewrite if it's an idea whose time has come. I'm looking forward to
> hearing your ideas in your pg
> Part of the underlying problem here is that, AFAIK, neither PostgreSQL
> as a piece of software nor we as human beings who operate PostgreSQL
> databases have much understanding of how autovacuum_max_workers should
> be set. It's relatively easy to hose yourself by raising
> autovacuum_max_worker
On Fri, Apr 19, 2024 at 02:42:13PM -0400, Robert Haas wrote:
> I think this could help a bunch of users, but I'd still like to
> complain, not so much with the desire to kill this patch as with the
> desire to broaden the conversation.
I think I subconsciously hoped this would spark a bigger discu
On Fri, Apr 19, 2024 at 11:43 AM Nathan Bossart
wrote:
> Removed in v2. I also noticed that I forgot to update the part about when
> autovacuum_max_workers can be changed. *facepalm*
I think this could help a bunch of users, but I'd still like to
complain, not so much with the desire to kill th
On Thu, Apr 18, 2024 at 05:05:03AM +, Imseih (AWS), Sami wrote:
> I looked at the patch set. With the help of DEBUG2 output, I tested to ensure
> that the autovacuum_cost_limit balance adjusts correctly when the
> autovacuum_max_workers value increases/decreases. I did not think the
> pa
> Here is a first attempt at a proper patch set based on the discussion thus
> far. I've split it up into several small patches for ease of review, which
> is probably a bit excessive. If this ever makes it to commit, they could
> likely be combined.
I looked at the patch set. With the help of DEB
Agree, +1. From a DBA perspective, I would prefer that this parameter can be
modified dynamically rather than adding a new parameter. What is more
difficult is how to smoothly reach the target value when the setting is
considered too large and needs to be lowered.
Regards
On Tue, 16 Apr 202
> Another option could be to just remove the restart-only GUC and hard-code
> the upper limit of autovacuum_max_workers to 64 or 128 or something. While
> that would simplify matters, I suspect it would be hard to choose an
> appropriate limit that won't quickly become outdated.
Hardcoded values a
On Mon, Apr 15, 2024 at 11:28:33AM -0500, Nathan Bossart wrote:
> On Mon, Apr 15, 2024 at 08:33:33AM -0500, Justin Pryzby wrote:
>> On Wed, Apr 10, 2024 at 04:23:44PM -0500, Nathan Bossart wrote:
>>> The proof-of-concept patch keeps autovacuum_max_workers as the maximum
>>> number of slots to reser
On Mon, Apr 15, 2024 at 08:33:33AM -0500, Justin Pryzby wrote:
> On Wed, Apr 10, 2024 at 04:23:44PM -0500, Nathan Bossart wrote:
>> The proof-of-concept patch keeps autovacuum_max_workers as the maximum
>> number of slots to reserve for workers, but I think we should instead
>> rename this paramete
On Wed, Apr 10, 2024 at 04:23:44PM -0500, Nathan Bossart wrote:
> The attached proof-of-concept patch demonstrates what I have in mind.
> Instead of trying to dynamically change the global process table, etc., I'm
> proposing that we introduce a new GUC that sets the effective maximum
> number of a
Here is a first attempt at a proper patch set based on the discussion thus
far. I've split it up into several small patches for ease of review, which
is probably a bit excessive. If this ever makes it to commit, they could
likely be combined.
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
> IIRC using GUC hooks to handle dependencies like this is generally frowned
> upon because it tends to not work very well [0]. We could probably get it
> to work for this particular case, but IMHO we should still try to avoid
> this approach.
Thanks for pointing this out. I agree, this could lea
On Fri, Apr 12, 2024 at 10:17:44PM +, Imseih (AWS), Sami wrote:
>>> Hm. Maybe the autovacuum launcher could do that.
>
> Would it be better to use a GUC check_hook that compares the
> new value with the max allowed values and emits a WARNING ?
>
> autovacuum_max_workers already has a check_a
>> 1/ We should emit a log when autovacuum_workers is set higher than the max.
>> Hm. Maybe the autovacuum launcher could do that.
Would it be better to use a GUC check_hook that compares the
new value with the max allowed values and emits a WARNING ?
autovacuum_max_workers already has a check
On Fri, Apr 12, 2024 at 05:27:40PM +, Imseih (AWS), Sami wrote:
> A few comments on the POC patch:
Thanks for reviewing.
> 1/ We should emit a log when autovacuum_workers is set higher than the max.
Hm. Maybe the autovacuum launcher could do that.
> 2/ should the name of the restart limit
I spent some time reviewing/testing the POC. It is relatively simple with a lot
of obvious value.
I tested with 16 tables that constantly reach the autovac threshold and the
patch did the right thing. I observed concurrent autovacuum workers matching
the setting as I was adjusting it dynamically.
On Thu, Apr 11, 2024 at 03:37:23PM +, Imseih (AWS), Sami wrote:
>> My concern with this approach is that other background workers could use up
>> all the slots and prevent autovacuum workers from starting
>
> That's a good point, the current settings do not guarantee that you
> get a worker fo
> My concern with this approach is that other background workers could use up
> all the slots and prevent autovacuum workers from starting
That's a good point, the current settings do not guarantee that you
get a worker for the purpose if none are available,
i.e. max_parallel_workers_per_gather,
On Thu, Apr 11, 2024 at 09:42:40AM -0500, Nathan Bossart wrote:
> On Thu, Apr 11, 2024 at 02:24:18PM +, Imseih (AWS), Sami wrote:
>> max_worker_processes defines a pool of max # of background workers allowed.
>> parallel workers and extensions that spin up background workers all utilize
>> fr
On Thu, Apr 11, 2024 at 02:24:18PM +, Imseih (AWS), Sami wrote:
> max_worker_processes defines a pool of max # of background workers allowed.
> parallel workers and extensions that spin up background workers all utilize
> from
> this pool.
>
> Should autovacuum_max_workers be able to utili
> I frequently hear about scenarios where users with thousands upon thousands
> of tables realize that autovacuum is struggling to keep up. When they
> inevitably go to bump up autovacuum_max_workers, they discover that it
> requires a server restart (i.e., downtime) to take effect, causing furthe
I frequently hear about scenarios where users with thousands upon thousands
of tables realize that autovacuum is struggling to keep up. When they
inevitably go to bump up autovacuum_max_workers, they discover that it
requires a server restart (i.e., downtime) to take effect, causing further
frustr