On Tue, Oct 12, 2021 at 6:05 PM Ruediger Pluem <rpl...@apache.org> wrote:
>
> On 10/12/21 4:58 PM, Yann Ylavic wrote:
> >
> > Shouldn't we also compute a busy_thread_count and kill children
> > if/when later things settle down and thus busy_thread_count <
> > max_spare_threads (like in the attached patch)?
>
> Hm. Wouldn't it be better instead to ensure that idle_thread_count is "far"
> larger than min_spare_threads, something like:
> idle_thread_count > active_daemons_limit/2/num_buckets*ThreadsPerChild +
> min_spare_threads

Right, better. Given that max_workers = active_daemons_limit *
ThreadsPerChild, let's rewrite this as:
  idle_thread_count > (max_workers/2 + min_spare_threads) / num_buckets
I don't get the +min_spare_threads part though: if "min_spare_threads >=
max_workers/2" the threshold would be at or above max_workers/num_buckets,
which is the most idle threads a bucket can ever have, thus never trigger?

When about to kill children we already know that "idle_thread_count >
max_spare_threads/num_buckets" (that's the block we are in), so being above
max_spare_threads is not necessarily "far" larger than min_spare_threads,
but maybe far enough?

Given also that "active_daemons_limit <= server_limit" here (per
event_check_config), how about we simply test for:
  if (retained->total_daemons <= active_daemons_limit
      || idle_thread_count > max_workers/{2,3,4} / num_buckets) {
      /* kill one child */
  }
  else {
      /* still busy enough, don't kill */
  }
?
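
To make the intent concrete, here is a minimal standalone sketch of that
decision (not a patch against event.c; the helper name, the plain int
parameters and the `divider` argument standing for the 2/3/4 factor are
mine, just for illustration):

  #include <stdio.h>

  /* Hypothetical helper, meant to be evaluated only once we already know
   * that idle_thread_count > max_spare_threads/num_buckets. */
  static int should_kill_one_child(int total_daemons, int active_daemons_limit,
                                   int idle_thread_count, int max_workers,
                                   int num_buckets, int divider)
  {
      if (total_daemons <= active_daemons_limit) {
          /* No extra (e.g. gracefully exiting) children around, so the
           * usual max_spare_threads rule applies: kill one. */
          return 1;
      }
      /* Still over the active limit (old children dying): only kill if we
       * are clearly over-provisioned. */
      return idle_thread_count > max_workers / divider / num_buckets;
  }

  int main(void)
  {
      /* Made-up numbers: 20 daemons with a limit of 16, 150 idle threads,
       * max_workers 400, 1 bucket, divider 4 => 150 > 100 => kill one. */
      printf("kill: %d\n", should_kill_one_child(20, 16, 150, 400, 1, 4));
      return 0;
  }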

>
> (2 is an arbitrary choice; it could also be larger to start reducing daemons earlier)

4 would be fine by me; opinions?

>
> This would prevent the situation where we have a dying daemon and a rise in
> requests would cause the start of a new daemon "soon".

By killing one child we'll get back to "retained->total_daemons <=
active_daemons_limit" and return to the normal logic: keep killing while
"idle_thread_count > max_spare_threads/num_buckets", and start no new child
until "idle_thread_count < min_spare_threads / num_buckets" (and since
idle_spawn_rate was reset to 1 there will be no spawning storm when/if the
latter happens).
So we should be good?
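
To spell the sequence out as I understand it (per bucket, simplified):
  1. idle_thread_count > max_spare_threads/num_buckets and the test above
     passes => kill one child, with idle_spawn_rate reset to 1;
  2. idle_thread_count between min_spare_threads/num_buckets and
     max_spare_threads/num_buckets => neither kill nor spawn;
  3. idle_thread_count < min_spare_threads/num_buckets => start spawning
     again, ramping up from idle_spawn_rate = 1, so no storm.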


Regards;
Yann.
