Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).
Sorry to drag up an old thread, but do we have an ETA for a new release of version 1.8 that contains the fix? I noticed that the 2.x versions have been updated, and wanted to make sure that 1.8 has not been left out by mistake.

Thank you,
Mark

On Wed, 16 Dec 2020 at 10:03, Peter Statham wrote:
>
> On Wed, 16 Dec 2020 at 08:40, Christopher Faulet wrote:
> >
> > On 11/12/2020 at 21:34, Peter Statham wrote:
> > >
> > > The patch seems to fix the issue.
> > >
> >
> > Peter,
> >
> > The fix was backported to the 1.8. Thanks!
> >
> > --
> > Christopher Faulet
>
> Hello Christopher,
>
> Thank you for your time finding the cause and the solution to this.
>
> --
> Peter Statham
> Loadbalancer.org Ltd.

--
Mark Brookes
Loadbalancer.org Ltd.
www.loadbalancer.org
+44 (0)330 380 1064
m...@loadbalancer.org
Throughput issue after moving between kernels.
Hi All,

We have been investigating an issue with reduced throughput. (It's quite possible that it's nothing to do with HAProxy.) I thought I would just check here to see if this rings a bell with anyone.

We are currently looking to update our kernel from 3.10.18 to 4.4.49. It appears that at some point in the move from 3.x.x to 4.x.x the kernel developers changed the tcp_mem calculation, which results in halving the values for the same amount of RAM. That isn't the problem itself, it just highlighted it.

Our test setup is: Multiple Clients --> HAProxy --> Real Server.

If I run a fairly heavy load using iperf through HAProxy on the 3.10.18 kernel and check:

  cat /proc/net/sockstat
  sockets: used 193
  TCP: inuse 116 orphan 0 tw 17 alloc 118 mem 25591
  UDP: inuse 12 mem 3
  UDPLITE: inuse 0
  RAW: inuse 1
  FRAG: inuse 0 memory 0

  cat /proc/sys/net/ipv4/tcp_mem
  89544 119392 179088

When I reboot into the 4.4.49 kernel and run the same test I get:

  cat /proc/net/sockstat
  sockets: used 198
  TCP: inuse 115 orphan 0 tw 18 alloc 117 mem 43957
  UDP: inuse 12 mem 2
  UDPLITE: inuse 0
  RAW: inuse 1
  FRAG: inuse 0 memory 0

  cat /proc/sys/net/ipv4/tcp_mem
  44721 59631 89442

HAProxy build info:

  Build options :
    TARGET  = linux2628
    CPU     = generic
    CC      = gcc
    CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
    OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

  Default settings :
    maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

  Encrypted password support via crypt(3): yes
  Built with zlib version : 1.2.3
  Running on zlib version : 1.2.3
  Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
  Built with OpenSSL version : OpenSSL 1.0.2j-fips 26 Sep 2016
  Running on OpenSSL version : OpenSSL 1.0.2j-fips 26 Sep 2016
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports prefer-server-ciphers : yes
  Built with PCRE version : 7.8 2008-09-05
  Running on PCRE version : 7.8 2008-09-05
  PCRE library supports JIT : no (USE_PCRE_JIT not set)
  Built without Lua support
  Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

  Available polling systems :
        epoll : pref=300, test result OK
         poll : pref=200, test result OK
       select : pref=150, test result OK
  Total: 3 (3 usable), will use epoll.

  Available filters :
        [SPOE] spoe
        [TRACE] trace
        [COMP] compression

I've tried the 4.4.95 kernel and get the same result. I've also tried 4.9.59. I've tried the latest 1.7.9 HAProxy too.

Does anyone have any ideas?

Thanks

--
Mark Brookes
Loadbalancer.org Ltd.
www.loadbalancer.org
+44 (0)330 380 1064
m...@loadbalancer.org
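For readers comparing the two kernels: both the "mem" field in /proc/net/sockstat and the three tcp_mem thresholds (min, pressure, max) are counted in pages, not bytes. A minimal sketch of the comparison, using the numbers from the mail above and assuming a 4 KiB page size (the helper name is mine, not a real tool):

```python
# Hypothetical helper: compare TCP memory in use (sockstat "mem", in pages)
# against the tcp_mem thresholds (/proc/sys/net/ipv4/tcp_mem: min, pressure,
# max -- also in pages). A 4 KiB page size is assumed.
PAGE = 4096

def tcp_mem_report(mem_pages, tcp_mem):
    low, pressure, high = tcp_mem
    return {
        "used_mb": mem_pages * PAGE / 2**20,
        "pressure_mb": pressure * PAGE / 2**20,
        "max_mb": high * PAGE / 2**20,
        "under_pressure": mem_pages >= pressure,
    }

# Figures quoted in the mail for the two kernels:
old = tcp_mem_report(25591, (89544, 119392, 179088))   # 3.10.18
new = tcp_mem_report(43957, (44721, 59631, 89442))     # 4.4.49

print(old, new)
```

On these figures neither run crosses the pressure threshold, but the 4.4.49 run sits much closer to its (halved) limits, which is the kind of margin change worth ruling out when chasing a throughput drop.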
Re: Getting JSON encoded data from the stats socket.
Could we perhaps group by the node and then the process_num? Something like:

{
  "nodename": {
    "pid1": {
      "haproxy": { "Uptime_sec": 100, "PoolFailed": 1 },
      "stats": {
        "frontend": {
          "www.haproxy.org": { "bin": "", "lbtot": "55", ... },
          "www.haproxy.com": { "bin": "", "lbtot": "55", ... }
        },
        "backend": {
          "www.haproxy.org": {
            "bin": "",
            "lbtot": "55",
            "server": {
              "srv1": { "bin": "", "lbtot": "55" },
              ...
            }
          }
        }
      }
    },
    "pid2": {
      "haproxy": { "Uptime_sec": 100, "PoolFailed": 1 },
      "stats": {
        ... (same structure as pid1)
      }
    }
  }
}

You get the idea.

On 26 July 2016 at 14:30, Willy Tarreau wrote:
> Hi Pavlos!
>
> On Tue, Jul 26, 2016 at 03:23:01PM +0200, Pavlos Parissis wrote:
>> Here is a suggestion
>> {
>>   "frontend": {
>>     "www.haproxy.org": { "bin": "", "lbtot": "55", ... },
>>     "www.haproxy.com": { "bin": "", "lbtot": "55", ... }
>>   },
>>   "backend": {
>>     "www.haproxy.org": {
>>       "bin": "",
>>       "lbtot": "55",
>>       "server": {
>>         "srv1": { "bin": "", "lbtot": "55" },
>>         ...
>>       }
>>     }
>>   },
>>   "haproxy": {
>>     "id1": { "PipesFree": "555", "Process_num": "1", ... },
>>     "id2": { "PipesFree": "555", "Process_num": "2", ... },
>>     ...
>>   }
>> }
>
> Thanks. How does it scale if we later want to aggregate these ones over
> multiple processes and/or nodes ? The typed output already emits a
> process number for each field. Also, we do have the information of how
> data need to be parsed and aggregated. I suspect that we want to produce
> this with the JSON output as well so that we don't lose information when
> dumping in JSON mode.
> I would not be surprised if people find JSON easier
> to process than our current format to aggregate their stats, provided we
> have all the fields :-)
>
> Cheers,
> Willy
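To illustrate the aggregation question Willy raises: with per-process stats grouped as {node: {pid: {"stats": ...}}} (the structure sketched above), merging across processes is a recursive walk. The field names come from the example; treating every numeric leaf as a summable counter is a deliberate simplification, since real fields would each need their own rule (sum, max, last, ...):

```python
# Sketch: merge nested per-process stat dicts by summing numeric leaves.
# Assumes every leaf is a counter -- a simplification; the typed output
# carries the real per-field aggregation semantics.
def merge_counters(dicts):
    """Recursively merge a list of nested dicts, summing numeric leaves."""
    out = {}
    for d in dicts:
        for key, val in d.items():
            if isinstance(val, dict):
                out[key] = merge_counters([out.get(key, {}), val])
            else:
                out[key] = out.get(key, 0) + int(val)
    return out

# Two hypothetical processes on one node, each with its own "stats" tree:
node = {
    "pid1": {"stats": {"frontend": {"www.haproxy.org": {"lbtot": "55"}}}},
    "pid2": {"stats": {"frontend": {"www.haproxy.org": {"lbtot": "45"}}}},
}

total = merge_counters([p["stats"] for p in node.values()])
print(total)  # frontend counters summed across both processes
```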
Re: Getting JSON encoded data from the stats socket.
> So for sure I definitely support this proposal :-)

That's great news. Do you have a JSON structure in mind? Or would you like me to come up with something?

On 5 July 2016 at 18:04, Willy Tarreau wrote:
> Hi Mark,
>
> On Tue, Jul 05, 2016 at 10:05:13AM +0100, Mark Brookes wrote:
>> Hi Willy/All
>>
>> I wondered if we could start a discussion about the possibility of
>> having the stats socket return stats data in JSON format.
>>
>> I'm primarily interested in the data that is returned by issuing a
>> 'show stat', which is normally returned as CSV.
>>
>> I won't go into specifics as to how the data would be structured; we
>> can decide on that later (assuming you are happy with this idea).
>>
>> I've approached Simon Horman and he's happy to do the work for us.
>>
>> Please let me know your thoughts.
>
> Well, I completely reworked the stats internals recently for two
> purposes :
> 1) bringing the ability to dump them in another format such as JSON ;
> 2) making it easier to aggregate them over multiple processes/nodes
>
> So for sure I definitely support this proposal :-)
>
> Best regards
> Willy
Getting JSON encoded data from the stats socket.
Hi Willy/All,

I wondered if we could start a discussion about the possibility of having the stats socket return stats data in JSON format.

I'm primarily interested in the data that is returned by issuing a 'show stat', which is normally returned as CSV.

I won't go into specifics as to how the data would be structured; we can decide on that later (assuming you are happy with this idea).

I've approached Simon Horman and he's happy to do the work for us.

Please let me know your thoughts.

Thanks

Mark
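For context on what 'show stat' returns today: CSV whose header line starts with "# ". A minimal sketch of fetching and parsing it into dicts, roughly the shape a JSON dump would provide for free. The socket path and the truncated sample rows are illustrative, not from a real deployment:

```python
# Sketch: query 'show stat' on the HAProxy stats socket and parse the CSV.
# Socket path and sample data are illustrative assumptions.
import csv
import io
import socket

def show_stat(sock_path="/var/run/haproxy.sock"):
    """Fetch raw 'show stat' output from the stats socket (path assumed)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"show stat\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode()

def parse_stat_csv(raw):
    """Turn the CSV into a list of dicts keyed by the header fields."""
    raw = raw.lstrip("# ")  # drop the comment marker on the header line
    return list(csv.DictReader(io.StringIO(raw)))

# Illustrative two-row sample (real output has many more columns):
sample = ("# pxname,svname,qcur,qmax,scur\n"
          "www,FRONTEND,,,3\n"
          "www,srv1,0,0,1\n")
rows = parse_stat_csv(sample)
print(rows[0]["pxname"], rows[0]["svname"])
```

Every consumer of the stats socket ends up writing some variant of this header-matching code by hand, which is exactly why a native JSON structure is attractive.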