changes in 2.5
Hello list,

I have a consumer of the master socket's `show proc` output, and I observed that 2.5 changed its layout. This change leaves me with two questions:

- Are there release notes or a similar document listing all the backward-compatibility changes between minor versions? I noticed that 2.5 now requires var() to have a match type, but there may be other changes I'm missing that I should take care of;

- Can you suggest a good way to identify the proxy version, so that I can use a distinct parser for the show proc output? I'm currently using the last field of the second line, but there might be a better option out there that I'm not aware of.

~jm
Re: [EXTERNAL] [PATCH] BUILD/MINOR fix solaris build with clang
Ah right. I use illumos-based systems (which come from OpenSolaris); Solaris 11 is a bit unstable in a VM without fixes.

On Mon, 17 Jan 2022 at 18:38, Willy TARREAU wrote:
>
> On Mon, Jan 17, 2022 at 05:38:46PM +, David CARLIER wrote:
> > Mostly gcc 7.5 and clang 9 nowadays.
>
> Sorry I was not clear :-) I meant, do you build on an old Sun, in an
> x86 VM, with a real Solaris or an OpenSolaris (or some variant) etc.
>
> I used to keep my old Sun working for a while because SPARC was extremely
> picky on alignment and could often detect issues that are harder to spot
> on other archs. I gave up when the CPU more or less died (the usual SPARC
> thing, L2 cache corruption all the time). I wondered if switching to x86
> to keep a Solaris would bring me any benefit or not in terms of coverage.
> Maybe it can if it still spots build issues (and we have evports there),
> but I'm wondering about the least invasive setup we can think about in
> this case.
>
> Thanks!
> Willy
Re: [PATCH] BUG/MEDIUM: server: avoid changing healthcheck ctx with set server ssl
Hello Christopher,

On Wed, Jan 12, 2022 at 12:45 PM William Dauchy wrote:
> my approach was to say:
> - remove the implicit behavior
> - then work on the missing commands for the health checks

Do you think we can conclude on it?

--
William
Re: [EXTERNAL] [PATCH] BUILD/MINOR fix solaris build with clang
On Mon, Jan 17, 2022 at 05:38:46PM +, David CARLIER wrote:
> Mostly gcc 7.5 and clang 9 nowadays.

Sorry I was not clear :-) I meant, do you build on an old Sun, in an
x86 VM, with a real Solaris or an OpenSolaris (or some variant) etc.

I used to keep my old Sun working for a while because SPARC was extremely
picky on alignment and could often detect issues that are harder to spot
on other archs. I gave up when the CPU more or less died (the usual SPARC
thing, L2 cache corruption all the time). I wondered if switching to x86
to keep a Solaris would bring me any benefit or not in terms of coverage.
Maybe it can if it still spots build issues (and we have evports there),
but I'm wondering about the least invasive setup we can think about in
this case.

Thanks!
Willy
2.0.26 breaks authentication
Hi,

The configuration uses 'no option http-use-htx' in the defaults section because of case insensitivity. The statistics path /haproxy?stats is behind a simple username/password, and both credentials are specified in the config. When accessing /haproxy?stats, 2.0.25 works fine, but 2.0.26 returns 401:

2022-01-17T18:34:50.643782+00:00 hostname.com haproxy[6125]: x.y.z.112:56316 x.y.z.161:443 [17/Jan/2022:18:34:50.643] main_frontend~ main_frontend/ -1/-1/-1/-1/0 401 278 - - PR-- 1/1/0/0/5 0/0 "GET /haproxy?stats HTTP/1.1"

Both versions are self-compiled and use exactly the same build config and environment.

HA-Proxy version 2.0.26-051d585 2021/12/03 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_THREAD=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_TFO=1 USE_SYSTEMD=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=1).
Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.4.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.23 2017-02-14
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services : none

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

With best regards,
Veiko
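For reference, a minimal configuration of the kind described in the report might look like the sketch below. The listener name, port, and credentials are placeholders invented for illustration, not taken from Veiko's setup:

```
# defaults as in the report: legacy (non-HTX) HTTP processing
defaults
    mode http
    no option http-use-htx

# stats endpoint protected by simple username/password
listen stats_listener
    bind :8404
    stats enable
    stats uri /haproxy?stats
    stats auth admin:adminpass
```

With such a setup, the report says requests to /haproxy?stats that authenticate fine on 2.0.25 get a 401 on 2.0.26 when the legacy mode is forced.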
Re: [EXTERNAL] [PATCH] MEDIUM: pool monitor memory pressure on macOs
Hi,

I kind of knew it would trigger some controversy (especially with such a specific API, as you highlighted). About your remarks on the CFLAGS: I had it automatically set in my shell environment without realising it (the "blocks" were triggered in my tests) and forgot to put it in the Makefile, but it would have needed a proper clang detection beforehand. Your last remark is a valid point; my local tests were simple, but what you highlight is very likely to happen. So yes, let's put it to rest indeed, I agree.

Regards.

On Mon, 17 Jan 2022 at 17:05, Willy TARREAU wrote:
>
> Hi David,
>
> On Sat, Jan 08, 2022 at 07:30:24PM +, David CARLIER wrote:
> > Hi
> >
> > Here a proposal to monitor memory pressure level on macOs to then
> > trigger pools trimming when it is critical.
> >
> > Cheers.
> >
> > thanks.
> >
> > From 6b93fc00168a4e6ff80609ceb64582fea8d96ca0 Mon Sep 17 00:00:00 2001
> > From: David CARLIER
> > Date: Sat, 8 Jan 2022 19:25:18 +
> > Subject: [PATCH] MEDIUM: pool catching memory pressure level from the system
> >  on macOs.
> >
> > proposal to provide an additional trigger to relieve the pressure on the
> > pools on macOs, if HAProxy is under critical memory pressure via the
> > dispatch API.
>
> For this one I have a few concerns:
>
> - on the build side, it uses some LLVM/Clang extensions ("blocks" API),
>   the test only uses defined(__BLOCKS__) which is possibly OK on modern
>   systems but I'd rather make sure we limit this test to LLVM only so
>   as to make sure that it's not inherited from something totally different
>   by accident;
>
> - on the build side again, my readings about the blocks API (which I
>   never heard about before) indicate that one has to pass -fblocks to
>   benefit from it, otherwise it's not used. Hence my understanding is
>   that this block remains excluded from the build.
>
> - on the maintenance side, I feel a bit concerned by the introduction
>   of exotic language extensions. Having to go through such an unusual
>   syntax just to call a function instead of passing a function pointer
>   looks inefficient at best (as it emits a dummy function that calls
>   the first one), and less maintainable, so as much as possible I'd
>   rather avoid this and just use a standard callback.
>
> - on the efficiency side, I'm a bit embarrassed. What do we define as
>   "critical" here ? How will users adjust the thresholds (if any) ?
>   How do we know that the preset thresholds will not cause extra
>   latencies by releasing memory too often for example ? Prior to 2.4,
>   depending on the build models, we used to call pool_gc() when facing
>   an allocation error, before trying again. Nowadays we do something
>   smarter: we monitor the average usage and spikes in each pool and
>   automatically release the cold areas, meaning that overall we use
>   less memory on varying loads, and are even less likely to salvage
>   extra memory if/when reaching a level considered critical.
>
> - last, we've faced deadlocks in the past between some pool_alloc()
>   and other blocks due to them being performed under thread isolation,
>   and here it makes me think that we could reintroduce random indirect
>   calls to thread_isolate() while in the process of allocating areas,
>   thus reintroducing the risk of potential deadlocks (i.e. when another
>   thread waits on thread_isolate and it cannot progress because it
>   waits on a lock we still hold).
>
> So I fear that there is more trouble to expect mid-term than benefits
> to win. I don't know if you have metrics which show significant benefits
> in using this that outweigh the issues above (especially the last one is
> really not trivial to overcome), but for now I'd rather not include such
> a change.
>
> Thanks,
> Willy
Re: [EXTERNAL] [PATCH] BUILD/MINOR fix solaris build with clang
Mostly gcc 7.5 and clang 9 nowadays.

Cheers.

On Mon, 17 Jan 2022 at 16:43, Willy TARREAU wrote:
>
> On Thu, Jan 13, 2022 at 07:20:57PM +, David CARLIER wrote:
> > Hi,
> >
> > here a little patch for solaris based systems.
>
> Thanks David, now applied.
>
> By the way, just out of curiosity, what do you use nowadays to build on
> Solaris ?
>
> Willy
Re: [EXTERNAL] [PATCH] MEDIUM: pool monitor memory pressure on macOs
Hi David,

On Sat, Jan 08, 2022 at 07:30:24PM +, David CARLIER wrote:
> Hi
>
> Here a proposal to monitor memory pressure level on macOs to then
> trigger pools trimming when it is critical.
>
> Cheers.
>
> thanks.
>
> From 6b93fc00168a4e6ff80609ceb64582fea8d96ca0 Mon Sep 17 00:00:00 2001
> From: David CARLIER
> Date: Sat, 8 Jan 2022 19:25:18 +
> Subject: [PATCH] MEDIUM: pool catching memory pressure level from the system
>  on macOs.
>
> proposal to provide an additional trigger to relieve the pressure on the
> pools on macOs, if HAProxy is under critical memory pressure via the
> dispatch API.

For this one I have a few concerns:

- on the build side, it uses some LLVM/Clang extensions ("blocks" API),
  the test only uses defined(__BLOCKS__) which is possibly OK on modern
  systems but I'd rather make sure we limit this test to LLVM only so
  as to make sure that it's not inherited from something totally different
  by accident;

- on the build side again, my readings about the blocks API (which I
  never heard about before) indicate that one has to pass -fblocks to
  benefit from it, otherwise it's not used. Hence my understanding is
  that this block remains excluded from the build.

- on the maintenance side, I feel a bit concerned by the introduction
  of exotic language extensions. Having to go through such an unusual
  syntax just to call a function instead of passing a function pointer
  looks inefficient at best (as it emits a dummy function that calls
  the first one), and less maintainable, so as much as possible I'd
  rather avoid this and just use a standard callback.

- on the efficiency side, I'm a bit embarrassed. What do we define as
  "critical" here ? How will users adjust the thresholds (if any) ?
  How do we know that the preset thresholds will not cause extra
  latencies by releasing memory too often for example ? Prior to 2.4,
  depending on the build models, we used to call pool_gc() when facing
  an allocation error, before trying again. Nowadays we do something
  smarter: we monitor the average usage and spikes in each pool and
  automatically release the cold areas, meaning that overall we use
  less memory on varying loads, and are even less likely to salvage
  extra memory if/when reaching a level considered critical.

- last, we've faced deadlocks in the past between some pool_alloc()
  and other blocks due to them being performed under thread isolation,
  and here it makes me think that we could reintroduce random indirect
  calls to thread_isolate() while in the process of allocating areas,
  thus reintroducing the risk of potential deadlocks (i.e. when another
  thread waits on thread_isolate and it cannot progress because it
  waits on a lock we still hold).

So I fear that there is more trouble to expect mid-term than benefits
to win. I don't know if you have metrics which show significant benefits
in using this that outweigh the issues above (especially the last one is
really not trivial to overcome), but for now I'd rather not include such
a change.

Thanks,
Willy
Re: [EXTERNAL] [PATCH] BUILD/MINOR fix solaris build with clang
On Thu, Jan 13, 2022 at 07:20:57PM +, David CARLIER wrote:
> Hi,
>
> here a little patch for solaris based systems.

Thanks David, now applied.

By the way, just out of curiosity, what do you use nowadays to build on
Solaris ?

Willy