Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> Tom Lane wrote:
>> I'm coming back to the idea that at least in the back branches, the
>> thing to do is allow maybe_start_bgworker to start multiple workers.
>>
>> Is there any actual evidence for the claim that that might have
>> bad side effects?
> Well, I ran tests with a few dozen thousand sample workers and the
> neglect for other things (such as connection requests) was visible, but
> that's probably not a scenario many servers run often currently.

Indeed.  I'm pretty skeptical that that's an interesting case, and if
it is, the current coding is broken anyway, because with that many
workers you are going to start noticing that running
maybe_start_bgworker over again for each worker is an O(N^2)
proposition.  Admittedly, iterating the loop in maybe_start_bgworker
is really cheap compared to a fork(), but eventually the big-O problem
is going to eat your lunch.

> I don't strongly object to the idea of removing the "return" in older
> branches, since it's evidently a problem.  However, as bgworkers start
> to be used more, I think we should definitely have some protection.  In
> a system with a large number of workers available for parallel queries,
> it seems possible for a high velocity server to get stuck in the loop
> for some time.  (I haven't actually verified this, though.  My
> experiments were with the early kind, static bgworkers.)

It might be sensible to limit the number of workers launched per call,
but I think the limit should be quite a bit higher than 1 ... something
like 100 or 1000 might be appropriate.

			regards, tom lane
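
For concreteness, here is a small self-contained toy model of that
trade-off.  It is only a sketch, not the actual postmaster.c logic:
the function name, the list representation, and the per-call cap are
made up for illustration.  It counts how many registration-list
entries get examined before all N workers have been launched, when
each scheduler call may start at most "cap" of them before returning
(cap = 1 corresponds to the current return-after-one behavior):

/*
 * Toy model only -- not the real maybe_start_bgworker().  Count how many
 * registration-list entries get examined before all "nworkers" workers
 * have been started, when each scheduler call may start at most "cap"
 * workers before returning.
 */
#include <stdio.h>

static long
total_scan_steps(long nworkers, long cap)
{
	long		steps = 0;
	long		started = 0;

	while (started < nworkers)
	{
		long		launched = 0;
		long		i;

		/* One scheduler call: walk the registration list from the top. */
		for (i = 0; i < nworkers; i++)
		{
			steps++;
			if (i >= started)	/* found a not-yet-started worker */
			{
				started++;
				launched++;
				if (launched >= cap)
					break;		/* the early "return" after cap launches */
			}
		}
	}
	return steps;
}

int
main(void)
{
	long		n = 10000;

	printf("cap=1:    %ld scan steps\n", total_scan_steps(n, 1));
	printf("cap=100:  %ld scan steps\n", total_scan_steps(n, 100));
	printf("cap=1000: %ld scan steps\n", total_scan_steps(n, 1000));
	return 0;
}

With n = 10000 this prints about 50 million scan steps for cap = 1,
roughly 500 thousand for cap = 100, and roughly 55 thousand for
cap = 1000, which is the O(N^2)-versus-capped difference described
above.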