On Thu, Apr 26, 2018 at 09:49:53AM +0200, Willy Tarreau wrote:

> On Thu, Apr 26, 2018 at 10:35:51AM +0300, Slawa Olhovchenkov wrote:
> > On Thu, Apr 26, 2018 at 09:25:59AM +0200, Willy Tarreau wrote:
> > 
> > > On Thu, Apr 26, 2018 at 10:21:27AM +0300, Slawa Olhovchenkov wrote:
> > > > > > Are the pollers distinct per frontend?
> > > > > > Can I bind pollers to CPUs?
> > > > > 
> > > > > Each thread has its own poller. Since you map threads to CPUs,
> > > > > you indeed have one poller per CPU.
> > > > 
> > > > Does each poller poll all sockets, or only the sockets from the frontends bound to it?
> > > 
> > > All sockets. All FDs in fact. This is normal, it's an event loop, it needs
> > > to be notified of *any* event (fd activity, signal).
> > 
> > I mean that with a dedicated listen socket, the poller could also be
> > dedicated, for load planning. For example:
> > 
> > frontend tcp1
> >         bind x.x.x.206:443
> >         bind-process 1/9-1/16
> >         mode tcp
> > 
> > threads 1-8 don't need any events from this socket at all.
> 
> That's exactly what happens.

No, not as far as I can see. After load starts on these sockets, I also
see CPU usage on CPUs 0-7. The load on all CPUs rises simultaneously.

Maybe I am missing something in the config?
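For reference, the shape of setup I am aiming for looks roughly like this. This is an illustrative sketch, not my exact configuration: the `nbthread` and `cpu-map` values are assumptions, written against the HAProxy 1.8 thread model:

```
global
    nbproc 1
    nbthread 16
    # pin thread N of process 1 to CPU N-1 (illustrative mapping)
    cpu-map auto:1/1-16 0-15

frontend tcp1
    bind x.x.x.206:443
    bind-process 1/9-1/16
    mode tcp
```

With threads 1-16 pinned to CPUs 0-15 this way, I would expect only CPUs 8-15 to see activity for this frontend.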

> > This also reduces communication with the kernel and improves data locality.
> > 
> > I mean that keeping an accepted socket local to a single poller would be good too.
> 
> It's what is done, don't worry. Please take a look at the code, namely
> fdtab[] in fd.h. You'll see a polled_mask and thread_mask for each fd,
> used to know what thread needs knowledge of the fd and what thread's
> poller currently polls the fd.
> 
> Willy