On Fri, Apr 20, 2018 at 03:55:25PM +0200, Willy Tarreau wrote:

> On Fri, Apr 20, 2018 at 03:50:52PM +0300, Slawa Olhovchenkov wrote:
> > Also something strange: after restart I see 100% busy on CPU#1 (the other
> > CPUs are as before -- from 0.05 to 0.4). It is a busy loop over kevent:
> > 
> > kqfd 11 cl 0 nc 0 eventlist 813400000 nevent 200 timeout 0.2000000
> > ret 11 errno 0
> > 
> > ev_kqueue.c:128
> > 
> > It looks like some events are not removed from the eventlist and are
> > permanently re-activated.
> 
> I'm just realizing that you're not on Linux, sorry. The multi-bind trick
> I proposed only works there, as the kernel is the one doing the load
> balancing between the sockets.
> 
> In your case it's different: only one thread will likely take the traffic
> (the last one bound), as its socket replaces the previous ones.
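For anyone following along, here is a minimal Python sketch (my own illustration, not HAProxy code) of the Linux-only multi-bind behaviour being described:

```python
# Minimal sketch (not HAProxy code) of the Linux "multi-bind" trick:
# each worker binds its own socket to the same ip:port with SO_REUSEPORT,
# and the Linux kernel load-balances incoming connections across them.
# On FreeBSD the classic SO_REUSEPORT does no such balancing -- the last
# socket bound takes the traffic -- which is why the trick does not help there.
import socket

def bound_listener(port=0):
    """Return a listening TCP socket bound on 127.0.0.1 with SO_REUSEPORT."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s

a = bound_listener()            # kernel picks a free port
port = a.getsockname()[1]
b = bound_listener(port)        # second bind to the same port succeeds
assert b.getsockname()[1] == port
a.close()
b.close()
```

On Linux, connections to that port would now be distributed between the two sockets; on FreeBSD (before SO_REUSEPORT_LB) they would all land on the last one bound.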
> 
> Thus for you it's better to stick to a single listener, and if you want to
> increase the fairness between the sockets, you can reduce tune.maxaccept in
> the global section as below:
> 
>   global
>      tune.maxaccept 8
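As I understand it, tune.maxaccept bounds how many queued connections a listener accepts per wakeup. A toy model (my own, not HAProxy's actual accept loop) of why a lower value evens out service:

```python
# Toy model (not HAProxy's actual accept loop): on each wakeup a listener
# accepts at most `maxaccept` queued connections before yielding, so a
# smaller value interleaves service between busy listeners more fairly.
def drain(pending, maxaccept):
    """Round-robin wakeups over listeners; return the order connections
    are served in. `pending` maps listener name -> queued connections."""
    order = []
    while any(pending.values()):
        for name in pending:
            n = min(pending[name], maxaccept)
            pending[name] -= n
            order.extend([name] * n)
    return order

# With a large maxaccept, one listener is fully drained before the other:
assert drain({"a": 4, "b": 4}, 100) == ["a"] * 4 + ["b"] * 4
# With a small one, service alternates in short bursts:
assert drain({"a": 4, "b": 4}, 2) == ["a", "a", "b", "b", "a", "a", "b", "b"]
```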

No significant difference: the average load rose, and the CPU load is still
as unequal as before.

> The kqueue issue you report is still unclear to me, however; I'm not very
> familiar with kqueue and always have a hard time decoding it.

