Thank you for the answer, and my apologies: my first understanding of this behaviour turned out to be a mistake.
I've spent a day running various tests on 1.9.7 and the latest commits, and prepared a bunch of logs and suggestions... then suddenly saw everything in a different light.

1. The mentioned condition "!{ req.fhdr(host) -m found }" simply never matches: when a request has no Host header, HAProxy always fills it in with the interface's IP address. In fact, my 403 for monitoring requests is generated at the next Lua stage, where a "domain" like "127.0.0.1" is detected as invalid. This oversight sent my thinking in the wrong direction and made me ignore possible Lua problems.

2. In Lua I use global variables to store some processed data during the request stage. It looks like in nbthread mode the Lua environment is shared between threads. That combination leads to a classic race condition, one that is very difficult to reproduce in synthetic tests.

I'm still not completely sure and will continue testing, but please don't spend any more of your time on this.

P.S. I use keep-alive everywhere, and so far I haven't seen any problems with the last snapshot in production.

> Hi!
>
> On Fri, May 24, 2019 at 11:30:03AM +0200, Lukas Tribus wrote:
>> Hello Wer,
>>
>> On Sun, 19 May 2019 at 15:31, Wert <accp...@gmail.com> wrote:
>> >
>> > Hi
>> >
>> > Short: sometimes Haproxy ignores "http-request" rules when nbthread
>> > is in use.
>> >
>> > Conditions:
>> > 1. Haproxy 1.9.8 from source + Debian with 4.19 kernel
>> > 2. Large config with thousands of backends + 15 MB of Lua
>> > 3. Significant (far from critical) load and extremely frequent reloads
>> > 4. Everything works OK with nbproc
>>
>> Can you confirm whether or not 1.9.7 has the same issue?
>
> And similarly it would be interesting to know if the problem still
> happens with the latest snapshot. An issue was found (and backported)
> affecting the response: depending on how fast the read()==0 was received
> on the response path, it was sometimes possible to disable all analysers
> and let the response pass unanalyzed to the client.
> What I'm wondering now is what consequences this can have on keep-alive
> requests. It might very well be that the second request will be sent
> directly to the same server without being analysed. The fact that this
> server goes to 127.0.0.1 is an important factor, as connection+response
> can happen in a much shorter time, which is one condition to trigger the
> issue. For those interested, the relevant commit in 1.9 is:
>
>     commit db61c40482da208ea02ca81f5fd36c8269e06225
>     Author: Olivier Houchard <ohouch...@haproxy.com>
>     Date:   Tue May 21 17:43:50 2019 +0200
>
>         BUG/MEDIUM: streams: Don't switch from SI_ST_CON to SI_ST_DIS on read0.
>
>         When we receive a read0, and we're still in SI_ST_CON state (so on an
>         outgoing connection), don't immediately switch to SI_ST_DIS, or we would
>         never call sess_establish(), and so the analysers would never run.
>         Instead, let sess_establish() handle that case, and switch to SI_ST_DIS
>         if we already have CF_SHUTR on the channel.
>
> Cheers,
> Willy
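For anyone curious about the race described in point 2 above, it can be reproduced outside HAProxy. This is a minimal Python sketch, not HAProxy or Lua code: the `buggy_handler` name, the request IDs, and the 50 ms sleep are all illustrative assumptions. Each thread stashes per-request data in a process-wide global, yields to other threads, then reads it back; concurrent "requests" overwrite each other, while per-thread storage (the analogue of keeping request-scoped data out of Lua globals) stays correct.

```python
import threading
import time

shared = {"value": None}     # one global shared by all threads (the bug)
local = threading.local()    # per-thread storage (the fix)

def buggy_handler(request_id, results):
    shared["value"] = request_id            # "request stage": store in a global
    time.sleep(0.05)                        # other threads run in this gap
    results[request_id] = shared["value"]   # may read another thread's data

def safe_handler(request_id, results):
    local.value = request_id                # per-thread slot instead of a global
    time.sleep(0.05)
    results[request_id] = local.value       # always this thread's own value

def run(handler):
    results = {}
    threads = [threading.Thread(target=handler, args=(i, results))
               for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # IDs whose stored value was overwritten by a concurrent thread
    return [i for i, v in results.items() if v != i]

corrupted = run(buggy_handler)   # almost always non-empty: last writer wins
clean = run(safe_handler)        # empty: no cross-thread interference
print(len(corrupted), len(clean))
```

The same reasoning suggests why the bug is so hard to hit synthetically: the corruption window only opens when two requests overlap between the store and the read-back, which needs real concurrent load.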