On 2016-11-25 14:44, Christian Ruppert wrote:
> Hi Willy,
>
> On 2016-11-25 14:30, Willy Tarreau wrote:
>> Hi Christian,
>>
>> On Fri, Nov 25, 2016 at 12:12:06PM +0100, Christian Ruppert wrote:
>>> I'll compare HT/no-HT afterwards. In my first tests it didn't seem
>>> to make much of a difference so far.
>
> I also tried (in this case) to disable HT entirely and set it to a
> maximum of 36 procs. Basically the same as before.
>
>> Also you definitely need to split your bind lines, one per process,
>> to take advantage of the kernel's ability to load balance between
>> multiple queues. Otherwise the load is always unequal and many
>> processes are woken up for nothing.
>
> I have a default bind on process 1, which is basically the http
> frontend and the actual backend; RSA is bound to another, single
> process, and ECC is bound to all the rest. So in this case SSL (ECC
> in particular) is the problem. The connections/handshakes should
> *actually* be using CPU 3 through NCPU. The only shared part is the
> backend, but that should be no problem for e.g. 5 parallel
> benchmarks, as a single HTTP benchmark can make >20k requests/s.
>
> global
>     nbproc 36
>
> defaults
>     bind-process 1
>
> frontend http
>     bind :65410
>     mode http
>     default_backend bk_ram
>
> frontend ECC
>     bind-process 3-36
>     bind :65420 ssl crt /etc/haproxy/test.pem-ECC
>     mode http
>     default_backend bk_ram
>
> backend bk_ram
>     mode http
>     fullconn 75000
>     errorfile 503 /etc/haproxy/test.error
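>
> If I split the ECC bind line as you suggest, one per process, I guess
> it would look roughly like this (untested sketch, assuming a version
> with the per-bind "process" keyword, i.e. 1.6+; only the first few of
> the 34 bind lines shown):
>
> frontend ECC
>     bind-process 3-36
>     bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 3
>     bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 4
>     bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 5
>     # ... one bind line per process, up to process 36
>     mode http
>     default_backend bk_ram
>
> That way each process gets its own listening socket, so the kernel
> (via SO_REUSEPORT, load-balanced since Linux 3.9) can spread incoming
> connections across the queues instead of waking every process bound
> to a single shared socket.
>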
>> Regards,
>> Willy

It seems to be the NIC, or rather the driver/kernel. Using Intel's
set_irq_affinity script (set_irq_affinity -x local eth2 eth3) seems to
do the trick, at least at first glance.
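
For the archives: as far as I understand the script, it just pins each
queue IRQ of the given interfaces to one CPU ("local" restricts that to
CPUs on the NIC's NUMA node, and -x additionally configures XPS), which
is roughly what one would otherwise do by hand:

    # find the queue IRQ numbers of the interface
    grep eth2 /proc/interrupts
    # pin queue IRQ 123 to CPU 2 (bitmask 0x4) -- the IRQ number here
    # is made up, take the real ones from the output above
    echo 4 > /proc/irq/123/smp_affinity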
--
Regards,
Christian Ruppert