On Tue, Dec 27, 2016 at 09:47:12AM +0100, Elias Abacioglu wrote:
> On Thu, Dec 22, 2016 at 11:06 AM, Willy Tarreau wrote:
> 
> > > As for my multi-proc SSL setup, in case anyone was wondering:
> > > I did an SSL-offload listener that runs on all cores except core0 on each
> > > CPU + its HT sibling, relaying via unix sockets to a frontend that runs
> > > on core0 on each CPU and its HT siblings (0,1,28,29 in my case).
> >
> > So you have cross-CPU communications, which is really bad for performance
> > and latency. However, you can have your cpu0 cores relay to the cpu0
> > listener and the cpu1 cores relay to the cpu1 listener.
> 
> 
> How would I achieve this configuration-wise?
> How do I tell the server line to only send traffic from cpu0 or cpu1?

You can't; haproxy gives you no control over this. The correct practice
is to have distinct network interfaces physically attached to their
respective CPU sockets, and then either to have dedicated routes reaching
the various destinations via the respective interfaces, or to force the
source address to match the interface you want.
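A minimal sketch of the source-address approach (addresses here are just
placeholders; the complete two-process example further below shows how it
combines with per-process binds): forcing "source" on a backend makes all
connections to the servers leave with that address, so the return traffic
comes back to the NIC holding it, provided the routing table agrees:

    backend bk_app
        # 192.168.2.1 is assumed to be the address of the backend-side NIC
        source 192.168.2.1
        server srv1 192.168.2.101:80 check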

E.g., let's say you have eth0 and eth1 attached to cpu0, and eth2 and eth3
attached to cpu1. You can imagine having eth0 and eth2 as the frontend
interfaces, and eth1 and eth3 as the backend interfaces.

Then you can have eth0 and eth2 in the same LAN, and a shared IP address
for the frontend in another LAN. The front switch/router/L3-L4 LB will
split the incoming traffic across these NICs to reach the shared IP
address. Then you have one haproxy process responsible for eth0+eth1 and
another one for eth2+eth3. Similarly to the frontend, on the backend side
you have eth1 and eth3 in the same LAN, and your haproxy processes reach
the various servers using these interfaces. Since they each have a
different IP address, there's never any doubt regarding the return route.

Example:

    listen lb1
        bind 192.168.1.1:80 interface eth0 process 1
        server srv1 192.168.2.101:80 source 192.168.2.1
        server srv2 192.168.2.102:80 source 192.168.2.1
        server srv3 192.168.2.103:80 source 192.168.2.1
        server srv4 192.168.2.104:80 source 192.168.2.1

    listen lb2
        bind 192.168.1.1:80 interface eth2 process 2
        server srv1 192.168.2.101:80 source 192.168.2.2
        server srv2 192.168.2.102:80 source 192.168.2.2
        server srv3 192.168.2.103:80 source 192.168.2.2
        server srv4 192.168.2.104:80 source 192.168.2.2

eth0 : 192.168.0.1/24   (frontend, cpu0)
eth1 : 192.168.2.1/24   (backend,  cpu0)
eth2 : 192.168.0.2/24   (frontend, cpu1)
eth3 : 192.168.2.2/24   (backend,  cpu1)
lo   : 192.168.1.1/24   (shared frontend service IP)

With eth0 and eth1 attached to cpu0, you pin process 1 to the other cores
of the same CPU socket. As you can see above, traffic flowing through
process 1 never touches the NICs attached to the second CPU socket, and
the same goes for process 2 with eth2 and eth3 on cpu1.
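The pinning itself happens in the global section with nbproc and cpu-map.
A minimal sketch, assuming even core numbers sit on cpu0 and odd ones on
cpu1 (the exact numbering depends on your machine's topology, e.g. the
0,1,28,29 layout mentioned earlier, so check it before copying this):

    global
        nbproc 2
        # process 1 handles eth0/eth1, so keep it on cpu0's cores
        # (core numbers below are examples only, verify with lscpu)
        cpu-map 1 0 2 4 6
        # process 2 handles eth2/eth3, so keep it on cpu1's cores
        cpu-map 2 1 3 5 7

The "process 1" / "process 2" arguments on the bind lines above then tie
each listener to the matching pinned process.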

I hope it's clear,
Willy
