Hi,

> There are some bugs with splice in 1.5-dev19... they have been fixed.
> 
> See this thread for the patches:
> http://comments.gmane.org/gmane.comp.web.haproxy/12774
> 
> (Or google for: "Oh and by the way, the bug was present since 1.5-dev12." )

This is not what Annika is seeing; that bug causes 100% CPU load in
userspace haproxy, whereas Annika is seeing higher system (kernel) load.



>> Also please tell:
>> - hardware (cpu/ram/nic at least) on old/new cluster
>
> - Two Intel(R) Xeon(R) CPU X6550 @ 2.00GHz in each cluster node
> - 2x Emulex Corporation OneConnect 10Gb NIC (rev 02) in each cluster node
> - 32 GB RAM in each cluster node
> - Two nodes per cluster (active-active in the new one)

Is the hardware of the old and the new cluster the same?



> - Debian Squeeze / 3.1.0-1-amd64 / Tickrate 250
> - CentOS release 6.4 (Final) / 3.11.5-1.el6 / Tickrate 1000

The higher the tickrate, the higher the CPU load. You quadrupled
the tickrate (250 to 1000); did your load roughly quadruple as well?
I suggest you try a lower tickrate with the very same configuration.

That said, splice should be way more efficient in 3.11 than in 3.1.
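To confirm the tickrate actually compiled into each kernel, you can read CONFIG_HZ from the installed kernel config. The paths below are an assumption about your setup; Debian and CentOS normally ship /boot/config-$(uname -r), and some kernels expose /proc/config.gz instead:

```shell
# Read the compiled-in kernel tick rate (CONFIG_HZ).
# The /boot/config-* path is distribution-specific; this is an
# assumption, with /proc/config.gz as a fallback.
grep -E '^CONFIG_HZ=' "/boot/config-$(uname -r)" 2>/dev/null \
    || zcat /proc/config.gz 2>/dev/null | grep -E '^CONFIG_HZ=' \
    || echo "kernel config not found"
```

Comparing that value on the old (250) and new (1000) node would tell you whether the tickrate difference is real before rebuilding anything.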



>> Are you using splice-auto or forcing splice by configuring
>> splice-request / splice-response?
>> 
> - We are forcing by splice-request / splice-response

I believe splice is not always more efficient than recv/send; use splice-auto
so haproxy applies it less aggressively (from the splice-auto documentation):

> Haproxy uses heuristics to estimate if kernel splicing might improve
> performance or not. Both directions are handled independently. Note
> that the heuristics used are not much aggressive in order to limit
> excessive use of splicing.

I don't know whether those heuristics are still fully valid for post-3.5
kernels, but using splice-auto probably doesn't hurt.
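For reference, the change would look roughly like this in your proxy section (the backend and server names here are made up for illustration):

```
backend web_servers
    # Forced splicing (what you have now):
    #   option splice-request
    #   option splice-response
    # Let haproxy's heuristics decide per direction instead:
    option splice-auto
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check
```

splice-auto can also be set in the defaults section so it applies everywhere.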



Regards,

Lukas
