Hi,

On 09/15/2010 03:44 AM, Robert Haas wrote:
> Hmm.  So what happens if you have 1000 databases with a minimum of 1
> worker per database and an overall limit of 10 workers?

The first 10 databases would each get an idle worker. As soon as real jobs arrive, the idle workers on databases without pending jobs get terminated in favor of databases that do have pending jobs (see the sketch further down). Admittedly, that mechanism isn't very clever yet: if there are always enough jobs for one database, the others could starve.

With 1000 databases and a maximum of only 10 workers, the chance of having a spare worker ready for the database that gets the next job is pretty low, yes: with the idle workers spread over the databases and jobs arriving uniformly, it's roughly 10 out of 1000, i.e. about 1%. But that's just as much the case with the proposed 5 minute timeout.

Lowering that timeout wouldn't increase that chance. And while it might make the start of a new bgworker quicker in the case mentioned above, I think there's not much advantage over simply setting max_idle_background_workers = 0.

OTOH such a timeout would be easy enough to implement. The admin would be faced with yet another GUC, though.
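
To make that a bit more concrete, here is a tiny, self-contained sketch of the reassignment decision described above. It is not code from the patch; all names (db_state, pick_victim, job_arrived, MAX_TOTAL_WORKERS, ...) are made up for illustration, and it ignores the per-database minimum, the idle timeout and all the locking and signalling a real coordinator needs. It only shows the rule: an incoming job first reuses an idle worker of its own database, then forks a new one if the overall limit permits, and otherwise terminates an idle worker of a database that has nothing queued.

/*
 * Illustrative sketch only -- not taken from the bgworker patch.
 */
#include <stdio.h>

#define NUM_DBS            4    /* toy setup: 4 databases...    */
#define MAX_TOTAL_WORKERS  4    /* ...and an overall limit of 4 */

typedef struct
{
    int idle_workers;    /* connected but idle bgworkers      */
    int busy_workers;    /* bgworkers currently running a job */
    int pending_jobs;    /* jobs queued for this database     */
} db_state;

static int
total_workers(const db_state *dbs)
{
    int total = 0;

    for (int i = 0; i < NUM_DBS; i++)
        total += dbs[i].idle_workers + dbs[i].busy_workers;
    return total;
}

/* Find a database whose idle worker may be sacrificed. */
static int
pick_victim(const db_state *dbs)
{
    for (int i = 0; i < NUM_DBS; i++)
        if (dbs[i].idle_workers > 0 && dbs[i].pending_jobs == 0)
            return i;
    return -1;                /* nobody to terminate */
}

/* A job arrived for database 'db'; decide how to serve it. */
static void
job_arrived(db_state *dbs, int db)
{
    dbs[db].pending_jobs++;

    if (dbs[db].idle_workers > 0)
    {
        /* Best case: a spare, already-connected worker picks it up. */
        dbs[db].idle_workers--;
        dbs[db].busy_workers++;
        dbs[db].pending_jobs--;
        printf("db %d: reused idle worker\n", db);
    }
    else if (total_workers(dbs) < MAX_TOTAL_WORKERS)
    {
        /* Below the overall limit: fork a fresh worker for this db. */
        dbs[db].busy_workers++;
        dbs[db].pending_jobs--;
        printf("db %d: started new worker\n", db);
    }
    else
    {
        /*
         * Limit reached: free a slot by terminating an idle worker of a
         * database that has nothing queued, then run the job here.
         */
        int victim = pick_victim(dbs);

        if (victim >= 0)
        {
            dbs[victim].idle_workers--;
            dbs[db].busy_workers++;
            dbs[db].pending_jobs--;
            printf("db %d: terminated idle worker of db %d\n", db, victim);
        }
        else
            printf("db %d: job stays queued\n", db);
    }
}

int
main(void)
{
    /* One idle worker per database, i.e. already at the overall limit. */
    db_state dbs[NUM_DBS] = {{1, 0, 0}, {1, 0, 0}, {1, 0, 0}, {1, 0, 0}};

    job_arrived(dbs, 2);    /* reuses db 2's own idle worker          */
    job_arrived(dbs, 2);    /* limit hit: takes over db 0's idle slot */
    return 0;
}

Compiled and run, that prints "db 2: reused idle worker" and then "db 2: terminated idle worker of db 0", which is exactly the "not too clever" behavior mentioned above: a busy database keeps winning the spare slots.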

> Hmm, I see.  That's probably not helpful for autovacuum, but I can see
> it being useful for replication.

Glad to hear.

> I still think maybe we ought to try
> to crack the nut of allowing backends to rebind to a different
> database.  That would simplify things here a good deal, although then
> again maybe it's too complex to be worth it.

Also note that it would re-introduce some of the costs we try to avoid by keeping the connected bgworker around. And if you can afford to keep at least a few spare bgworkers around per database (i.e. with fewer than 10 or 20 databases), the potential savings seem negligible again.

Regards

Markus Wanner
