Re: varnishtest with H2>HTX>H1(keep-alive)
Hi Pieter,

On Wed, Nov 21, 2018 at 12:10:43AM +0100, PiBa-NL wrote:
> I did notice there is one line regarding the 'double logging' I have got
> configured though which I'm not sure is supposed to happen, it seems to be
> because I'm having both stdout and :514 logging, should that not be
> possible?:
>     ***  h1   0.0 debug|[ALERT] 323/233813 (5) : sendmsg()/writev() failed
>     in logger #2: Socket operation on non-socket (errno=38)
>
> Partial config:
>     global
>         log stdout format raw daemon
>         log :1514 local0

I'm not sure what you mean with "double logging". Above you're asking to
send the log both to stdout and to local UDP port 1514. The sendmsg/writev
error is very likely caused by the UDP one if nothing is listening locally.
But I'm a bit surprised by the error since it's supposed to be a socket.
I'll try to reproduce in case I messed up with the FDs when implementing
this.

Thanks!
Willy
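For anyone trying to reproduce this, a throwaway local sink is enough to give the `log :1514 local0` line a receiver and separate the "nothing listening" case from the "non-socket fd" alert. This is a hedged sketch, not part of the original thread; the function name, port, and message count are test assumptions, and no RFC 3164 syslog parsing is attempted:

```python
import socket

def run_udp_syslog_sink(host="127.0.0.1", port=1514, count=1):
    """Receive 'count' datagrams on the given UDP port and return them.

    Just enough of a listener so that a 'log :1514 local0' directive has a
    local receiver during tests; port/count are assumptions for this sketch,
    not HAProxy defaults.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    msgs = []
    for _ in range(count):
        data, _addr = sock.recvfrom(65535)
        msgs.append(data.decode(errors="replace"))
    sock.close()
    return msgs
```

Run it in a second terminal (or a background thread) while the test executes; if the alert persists even with a bound receiver, the problem is on HAProxy's side of the fd, as Willy suspects above.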
Re: HAProxy bytes in/bytes out stats are not updated
On Tue, Nov 20, 2018 at 09:44:41PM +0100, Lukas Tribus wrote:
> Restart the process hard, limit the amount of time those old session
> keep running by lowering timeouts and/or setting hard-stop-after.

Indeed that's also a very good solution, it depends on whether it's
acceptable to stop older sessions or not (which I don't know). Some
protocols using long sessions (like RDP) support being disconnected and
reconnected very well, it's even transparent for the user. For some
others (like SSH) it's clearly not acceptable. But to be honest I've
never understood why some people load-balance SSH :-) Here I don't know
what it is.

Willy
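For reference, a minimal sketch of the hard-stop-after approach Lukas mentions (the 30m value, proxy layout, and timeouts are arbitrary placeholders, not recommendations from the thread):

```
global
    # After a reload/soft-stop, force the old process to exit after at
    # most 30 minutes, bounding how long pre-reload sessions can survive.
    hard-stop-after 30m

defaults
    mode tcp
    timeout client 1h
    timeout server 1h
```

Whether this is acceptable depends on the protocol, exactly as discussed above: fine for reconnect-tolerant protocols like RDP, disruptive for SSH.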
Re: varnishtest with H2>HTX>H1(keep-alive)
Hi Christopher, Willy,

Op 20-11-2018 om 12:09 schreef Christopher Faulet:
> Hi,
>
> The H2 is not yet compatible with the HTX for now. So you should never
> use both at the same time. However, this configuration error should be
> detected during the configuration parsing, to avoid runtime errors.
> Here is a patch to do so. I'll merge it.
>
> Thanks
> --
> Christopher Faulet

Thanks, the 'old' config which tried to combine H2 and HTX is now
rejected (as expected). I guess I misinterpreted the 'need' for HTX for
the H2 conversion and features, which I thought would include the new
keep-alive this version brings, but I guess those are separate things.
Keep-alive for a H1 backend coming from a H2 frontend works fine without
using that option.

New testcase attached, one that actually works, regarding H2 > H1 with
keep-alive (without the HTX option though..). It shows as 'passed' when
run.

I did notice there is one line regarding the 'double logging' I have got
configured though which I'm not sure is supposed to happen, it seems to
be because I'm having both stdout and :514 logging, should that not be
possible?:
    ***  h1   0.0 debug|[ALERT] 323/233813 (5) : sendmsg()/writev() failed
    in logger #2: Socket operation on non-socket (errno=38)

Partial config:
    global
        log stdout format raw daemon
        log :1514 local0

I'm using "HA-Proxy version 1.9-dev7-7ff4f14 2018/11/20" this time.
Or is it (again) something I'm configuring wrongly? ;)
Regards,
PiBa-NL (Pieter)

# h2 with h1 backend connection reuse check
varnishtest "h2 with h1 backend connection reuse check"
feature ignore_unknown_macro
#REQUIRE_VERSION=1.9

server s1 {
    rxreq
    txresp -gziplen 200
    rxreq
    txresp -gziplen 200
} -start

server s2 {
    stream 0 {
        rxsettings
        txsettings -ack
    } -run
    stream 1 {
        rxreq
        txresp -bodylen 200
    } -run
    stream 3 {
        rxreq
        txresp -bodylen 200
    } -run
} -start

server s3 -repeat 2 {
    rxreq
    txresp -gziplen 200
} -start

server s4 {
    timeout 3
    rxreq
    txresp -gziplen 200
    rxreq
    txresp -gziplen 200
} -start

server s5 {
    timeout 3
    rxreq
    txresp -gziplen 200
    rxreq
    txresp -gziplen 200
} -start

haproxy h1 -conf {
    global
        #nbthread 3
        log stdout format raw daemon
        log :1514 local0
        stats socket /tmp/haproxy.socket level admin

    defaults
        mode http
        #option dontlog-normal
        log global
        option httplog
        timeout connect 3s
        timeout client 40s
        timeout server 40s

    listen fe1
        bind "fd@${fe1}"
        server srv1 ${s1_addr}:${s1_port}

    listen fe3
        bind "fd@${fe3}" proto h2
        server srv3 ${s3_addr}:${s3_port}

    listen fe4
        bind "fd@${fe4}" proto h2
        server srv4 ${s4_addr}:${s4_port}

    listen fe5
        bind "fd@${fe5}" ssl crt /usr/ports/net/haproxy-devel/test/common.pem alpn h2
        server srv5 ${s5_addr}:${s5_port}
} -start

client c1 -connect ${h1_fe1_sock} {
    txreq -url "/1"
    rxresp
    expect resp.status == 200
    txreq -url "/2"
    rxresp
    expect resp.status == 200
} -start

client c1 -wait

client c2 -connect ${s2_sock} {
    stream 0 {
        txsettings -hdrtbl 0
        rxsettings
    } -run
    stream 1 {
        txreq -req GET -url /3
        rxresp
        expect resp.status == 200
    } -run
    stream 3 {
        txreq -req GET -url /4
        rxresp
        expect resp.status == 200
    } -run
} -start

client c2 -wait

client c3 -connect ${h1_fe3_sock} {
    stream 0 {
        txsettings -hdrtbl 0
        rxsettings
    } -run
    stream 1 {
        txreq -req GET -url /3
        rxresp
        expect resp.status == 200
    } -run
    stream 3 {
        txreq -req GET -url /4
        rxresp
        expect resp.status == 200
    } -run
} -start

client c3 -wait

client c4 -connect ${h1_fe4_sock} {
    stream 0 {
        txsettings -hdrtbl 0
        rxsettings
    } -run
    stream 1 {
        txreq -req GET -url /3
        rxresp
        expect resp.status == 200
    } -run
    stream 3 {
        txreq -req GET -url /4
        rxresp
        expect resp.status == 200
    } -run
} -start

client c4 -wait

shell {
    HOST=${h1_fe5_addr}
    if [ "${h1_fe5_addr}" = "::1" ] ; then
        HOST="\[::1\]"
    fi
    curl --http2 -i -k https://$HOST:${h1_fe5_port}/CuRLtesT_1/ https://$HOST:${h1_fe5_port}/CuRLtesT_2/
}

server s1 -wait
server s2 -wait
server s3 -wait
server s4 -wait
server s5 -wait
Re: HAProxy bytes in/bytes out stats are not updated
Also I just noticed, when I reload HAProxy in master-worker mode with
SIGUSR2, stats stop getting updated for already established sessions. I
need to re-establish the sessions in order to see stat updates. Is this a
desired behaviour? Or probably there is a way to fix this?

Thanks!

Regards,
Sergey

> On 20 Nov 2018, at 17:51, Willy Tarreau wrote:
>
> On Tue, Nov 20, 2018 at 05:35:14PM +0300, Sergey Arlashin wrote:
>> Hi Willy,
>>
>> Thank you for the answer. I checked contstats and I see it is actually
>> working. HAProxy - 1.8.1.
>> Even small requests are reflected in the traffic stats.
>
> Ah you're right, I completely forgot I addressed this two years ago
> with this commit :
>
>     commit def0d22cc54229072a8abb6a850e6805208790f5
>     Author: Willy Tarreau
>     Date:   Tue Nov 8 22:03:00 2016 +0100
>
>         MINOR: stream: make option contstats usable again
>
>         Quite a lot of people have been complaining about option contstats not
>         working correctly anymore since about 1.4. The reason was that one reason
>         for the significant performance boost between 1.3 and 1.4 was the ability
>         to forward data between a server and a client without waking up the stream
>         manager. And we couldn't afford to force sessions to constantly wake it
>         up given that most of the people interested in contstats are also those
>         interested in high performance transmission.
>         (...)
>
> It now forces the streams to wake up at least every 5 seconds to update
> the counters. It's even documented for the option. Be careful that with a
> large number of concurrent connections (hundreds of thousands) it can cause
> an increase of CPU usage even when the connections are idle, just because
> each of them will wake up every 5 seconds. But usually it's not a problem
> if you're facing issues with jumps in stats.
>
> Great, I'm happy to have nothing to do and that something I did and did
> not remember makes a user happy :-)
>
> Willy
Re: h1 buffer / confirmation
On Tue, Nov 20, 2018 at 05:09:00PM +0100, Christopher Faulet wrote:
> Le 20/11/2018 à 17:06, David CARLIER a écrit :
>> Hi Christopher (I think you maintain it),
>>
>> Just to confirm into src/h1_mux.c, line 131 you meant
>>
>>     h1c->flags & (H1C_F_CS_ERROR|H1C_F_CS_SHUTW)
>>
>> instead ?
>
> Argh, right! Thanks for the report. I'll fix that.

I've just checked and couldn't find any other similar one (except in
pseudo-code in comments but that doesn't count). Thanks David!

Willy
Re: h1 buffer / confirmation
Le 20/11/2018 à 17:06, David CARLIER a écrit :
> Hi Christopher (I think you maintain it),
>
> Just to confirm into src/h1_mux.c, line 131 you meant
>
>     h1c->flags & (H1C_F_CS_ERROR|H1C_F_CS_SHUTW)
>
> instead ?

Argh, right! Thanks for the report. I'll fix that.

--
Christopher Faulet
h1 buffer / confirmation
Hi Christopher (I think you maintain it),

Just to confirm into src/h1_mux.c, line 131 you meant

    h1c->flags & (H1C_F_CS_ERROR|H1C_F_CS_SHUTW)

instead ?

Kind regards.
Re: [ANNOUNCE] haproxy-1.9-dev4
Hi Dirkjan,

On Tue, Nov 20, 2018 at 04:13:47PM +0100, Dirkjan Bussink wrote:
> Hi Willy,
>
>> On 23 Oct 2018, at 14:48, Willy Tarreau wrote:
>>
>> You're right. I started backporting fixes for it last week. I think it
>> would make sense to make one "soon" (maybe next week-end along dev5).
>> In the mean time you can pick the latest maintenance snapshot if you
>> want, it already contains your work.
>
> Could I ask again for a release of 1.8 as well? 1.9 has seen a few more
> dev releases already and wondering if 1.8 can get one again?

Indeed it's already been two months, it would be the right time to emit a
new one. But at the moment all the people able to work on this are fully
loaded finishing their respective parts for 1.9 (or fixing it). Are you
missing a specific fix at the moment ? I'm asking because the 1.8 queue
doesn't look huge and most of the recently fixed bugs in 1.9 do not
affect 1.8.

Regards,
Willy
Re: [ANNOUNCE] haproxy-1.9-dev4
Hi Willy,

> On 23 Oct 2018, at 14:48, Willy Tarreau wrote:
>
> You're right. I started backporting fixes for it last week. I think it
> would make sense to make one "soon" (maybe next week-end along dev5).
> In the mean time you can pick the latest maintenance snapshot if you
> want, it already contains your work.

Could I ask again for a release of 1.8 as well? 1.9 has seen a few more
dev releases already and I'm wondering if 1.8 can get one again?

Cheers,
Dirkjan
Re: HAProxy bytes in/bytes out stats are not updated
On Tue, Nov 20, 2018 at 05:35:14PM +0300, Sergey Arlashin wrote:
> Hi Willy,
>
> Thank you for the answer. I checked contstats and I see it is actually
> working. HAProxy - 1.8.1.
> Even small requests are reflected in the traffic stats.

Ah you're right, I completely forgot I addressed this two years ago
with this commit :

    commit def0d22cc54229072a8abb6a850e6805208790f5
    Author: Willy Tarreau
    Date:   Tue Nov 8 22:03:00 2016 +0100

        MINOR: stream: make option contstats usable again

        Quite a lot of people have been complaining about option contstats not
        working correctly anymore since about 1.4. The reason was that one reason
        for the significant performance boost between 1.3 and 1.4 was the ability
        to forward data between a server and a client without waking up the stream
        manager. And we couldn't afford to force sessions to constantly wake it
        up given that most of the people interested in contstats are also those
        interested in high performance transmission.
        (...)

It now forces the streams to wake up at least every 5 seconds to update
the counters. It's even documented for the option. Be careful that with a
large number of concurrent connections (hundreds of thousands) it can cause
an increase of CPU usage even when the connections are idle, just because
each of them will wake up every 5 seconds. But usually it's not a problem
if you're facing issues with jumps in stats.

Great, I'm happy to have nothing to do and that something I did and did
not remember makes a user happy :-)

Willy
Re: HAProxy bytes in/bytes out stats are not updated
Hi Willy,

Thank you for the answer. I checked contstats and I see it is actually
working. HAProxy - 1.8.1. Even small requests are reflected in the
traffic stats.

Regards,
Sergey

> On 18 Nov 2018, at 20:47, Willy Tarreau wrote:
>
> Hi Sergey,
>
> On Sun, Nov 18, 2018 at 05:23:23PM +0300, Sergey Arlashin wrote:
>> Hi!
>>
>> We have a TCP service that is load-balanced with HAProxy. It works pretty
>> well, however the stats page doesn't seem to report correct traffic
>> statistics. Even though we have data transferred all the time, stats show
>> the same amount of bytes in/out.
>>
>> Our traffic is mainly long-running TCP sessions that, once established,
>> remain in ESTABLISHED state for a very long time. Probably it is somehow
>> related?
>>
>> Can anyone please help me solve this issue?
>
> Stats are usually updated only at session termination. There is "option
> contstats" to allow such counters to be updated upon each transfer, but
> starting around 1.3.16 or so, it became less effective since it's only
> performed at the upper layers while direct forwarding automatically
> happens at much lower layers. With this said, with this option, an
> update will be performed at least once every 2 GB, which I admit is
> already not often enough for most use cases, but it's only a side effect
> of the fact that we don't schedule more than 2 GB to be forwarded at once.
>
> At the moment I don't know what could be done to force these counters to
> be updated more often. I suspect that it would be possible to implement
> a dummy filter to force this to happen, which could possibly be a nice
> option instead of a one-size-fits-all, but I'm not certain about this.
>
> If anyone else has an idea, I'm interested as well :-)
>
> Willy
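Based on the explanation in this thread, a minimal sketch of enabling the option (proxy name, address, and timeouts are placeholders, not values from the thread):

```
defaults
    mode tcp
    timeout client 1h
    timeout server 1h
    # Update per-session byte counters during the session instead of only
    # at its end; since commit def0d22 the streams wake up at least every
    # 5 seconds for this, at some CPU cost with very many idle connections.
    option contstats

listen long_tcp
    bind :9000
    server app1 192.0.2.10:9000
```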
Re: haproxy segfaults when clearing the input buffer via LUA
Le 17/11/2018 à 20:42, Willy Tarreau a écrit :
> Hi Moemen,
>
> On Wed, Nov 14, 2018 at 04:07:42PM +0100, Moemen MHEDHBI wrote:
>> Hi,
>>
>> I was playing with LUA, to configure a traffic mirroring behavior.
>> Basically I wanted HAProxy to send the http response of a request to a
>> 3rd party before sending the response to the client. So this is the
>> stripped down version of the script to reproduce the segfault with
>> haproxy from the master branch:
>>
>>     function mirror(txn)
>>         local in_len = txn.res:get_in_len()
>>         while in_len > 0 do
>>             response = txn.res:dup()
>>             -- sending response to 3rd party.
>>             txn.res:forward(in_len)
>>             core.yield()
>>             in_len = txn.res:get_in_len()
>>         end
>>     end
>>     core.register_action("mirror", { "http-res" }, mirror)
>>
>> Then I use this script via "http-response lua.mirror"
>>
>> I think the problem here is that when I forward the response from the
>> input buffer to the output buffer and hand processing back to HAProxy,
>> the latter will try to send an invalid http request. The request is
>> invalid because HAProxy did not have the opportunity to check the
>> response and make sure there are valid headers because the input buffer
>> is empty after the core.yield(). So I was expecting an error and
>> HAProxy telling me that this is an invalid request, but not a segfault.
>
> I can't tell for sure, but I totally agree it should never segfault, so
> at the very least we're missing a test. However I suspect there is a
> problem with the presence of the forward() call in your script, because
> by doing this you're totally bypassing the HTTP engine, so your script
> was called in an http context, it discretely stole the contents under
> the blanket, and went back to the http engine saying "I did nothing,
> it's not me!". The rest of the code continues to process the HTTP
> contents from the buffer where they are, resulting in quite a big mess.
>
> Ideally we should have a way to detect that parts of the buffer were
> moved on return and immediately send an error there. But there are some
> cases where it's valid if called using the HTTP API. So I don't know
> for sure how to detect such anomalies. Maybe buffer contents being
> smaller than the size of headers known by the parser would already be a
> good step forward. I remember Thierry recently had to try to strengthen
> a little bit such use cases where tcp was used from within HTTP. We'll
> definitely have to figure what the use cases are for this and to find a
> reliable solution, because by definition it will not work anymore with
> HTX.
>
>> There are two ways to avoid this by changing the script:
>> 1/ Use mode tcp
>> 2/ Use "get" and "send" instead of "forward", this way the LUA script
>>    will send the response directly to the client, instead of HAProxy
>>    doing that.
>
> It should still cause the same problem, which is that the HTTP parser is
> totally bypassed and what you forward is not HTTP anymore, but bytes
> from the wire, and you may even expect that the HTTP parser appends an
> error at some places and aborts if it discovers the stream is mangled.
>
> I don't know if we can register filters from the Lua, but ideally that's
> what would be the best option in your case: having a Lua-based filter
> running on the data part would allow you to intercept the data stream
> for each chunk decoded by the HTTP parser.

For the record, here is my old reply on a similar issue:

https://www.mail-archive.com/haproxy@formilux.org/msg29571.html

So, to be safe, don't use get/set/forward/send in HTTP without
terminating the transaction with txn.done(). The Lua API must definitely
be changed to be more restrictive in HTTP. When the LUA is updated to
support the HTX representation, I'll see with Thierry how to clarify
this point.

--
Christopher
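As an illustration of the "get/send plus txn.done()" advice above, here is an untested sketch of the mirror action reworked that way. It assumes the same pre-HTX Lua Channel API used in the original script (get_in_len, get, send, txn.done) and is only a sketch of the safer pattern, not an endorsed implementation:

```lua
-- Hypothetical rework of the mirror action: take the response bytes out
-- of the buffer explicitly, send them to the client from Lua, then
-- terminate the transaction with txn.done() so the HTTP engine never
-- resumes on a buffer the script has already consumed.
function mirror(txn)
    local in_len = txn.res:get_in_len()
    while in_len > 0 do
        local data = txn.res:get()   -- consume the pending input bytes
        -- ... hand 'data' to the third party here ...
        txn.res:send(data)           -- forward the same bytes to the client
        core.yield()
        in_len = txn.res:get_in_len()
    end
    txn.done()  -- mark the transaction as finished for HAProxy
end
core.register_action("mirror", { "http-res" }, mirror)
```

Note that, as Willy points out above, this still bypasses the HTTP parser; it merely avoids handing a half-consumed buffer back to the engine.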
Re: varnishtest with H2>HTX>H1(keep-alive)
Le 20/11/2018 à 08:36, Frederic Lecaille a écrit :
>> However that still doesn't work yet (as also already seen by Frederic):
>>
>> **   c4   0.2 === txreq -req GET -url /3
>> ***  c4   0.2 tx: stream: 1, type: HEADERS (1), flags: 0x05, size: 37
>> **   c4   0.2 === rxresp
>> ***  h1   0.2 debug|0007:fe4.accept(000e)=0010 from [::1:13402] ALPN=
>>      h1   0.2 STDOUT poll 0x11
>> ***  c4   0.2 HTTP2 rx EOF (fd:6 read: No error: 0)
>>      c4   0.2 could not get frame header
>> **   c4   0.2 Ending stream 1
>> ***  c4   0.2 closing fd 6
>> **   c4   0.2 Ending
>> *    top  0.2 RESETTING after ./PB-TEST/h2-keepalive-backend.vtc
>> **   h1   0.2 Reset and free h1 haproxy 31909
>> **   h1   0.2 Wait
>> **   h1   0.2 Stop HAproxy pid=31909
>> **   h1   0.2 WAIT4 pid=31909 status=0x008b (user 0.013928 sys 0.00)
>> *    h1   0.2 Expected exit: 0x0 signal: 0 core: 0
>>      h1   0.2 Bad exit status: 0x008b exit 0x0 signal 11 core 128
>> *    top  0.2 failure during reset
>> #    top  TEST ./PB-TEST/h2-keepalive-backend.vtc FAILED (0.169) exit=2
>
> Note that, on my side this crash is always reproducible if we use only
> c4 and s4 (after removing all the other c[1-3] and s[1-3] clients and
> servers).

Hi,

The H2 is not yet compatible with the HTX for now. So you should never
use both at the same time. However, this configuration error should be
detected during the configuration parsing, to avoid runtime errors. Here
is a patch to do so. I'll merge it.

Thanks
--
Christopher Faulet

From 46eef7144892ddb97fadfa76771092688d3b2e76 Mon Sep 17 00:00:00 2001
From: Christopher Faulet
Date: Tue, 20 Nov 2018 11:23:52 +0100
Subject: [PATCH] BUG/MINOR: config: Be aware of the HTX during the check of
 mux protocols

Because the HTX is still experimental, we must add special cases during
the configuration check to be sure it is not enabled on a proxy with
incompatible options. Here, for HTX proxies, when a mux protocol is
specified on a bind line or a server line, we must force the HTX mode
(PROTO_MODE_HTX). Concretely, H2 is the only mux protocol that can be
forced. And it doesn't yet support the HTX. So forcing the H2 on an HTX
proxy will always fail.
---
 src/cfgparse.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 2eb966377..2c660ab0d 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -3321,6 +3321,10 @@ out_uri_auth_compat:
 	list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
 		int mode = (1 << (curproxy->mode == PR_MODE_HTTP));
 
+		/* Special case for HTX because it is still experimental */
+		if (curproxy->options2 & PR_O2_USE_HTX)
+			mode = PROTO_MODE_HTX;
+
 		if (!bind_conf->mux_proto)
 			continue;
 		if (!(bind_conf->mux_proto->mode & mode)) {
@@ -3335,6 +3339,10 @@ out_uri_auth_compat:
 	for (newsrv = curproxy->srv; newsrv; newsrv = newsrv->next) {
 		int mode = (1 << (curproxy->mode == PR_MODE_HTTP));
 
+		/* Special case for HTX because it is still experimental */
+		if (curproxy->options2 & PR_O2_USE_HTX)
+			mode = PROTO_MODE_HTX;
+
 		if (!newsrv->mux_proto)
 			continue;
 		if (!(newsrv->mux_proto->mode & mode)) {
-- 
2.17.2
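To make the effect of the patch concrete, a configuration along these lines (sketched from the testcase in this thread; the address is a placeholder) should now be rejected at startup with a mux-protocol incompatibility error instead of crashing at runtime, since the HTX option forces PROTO_MODE_HTX while the forced H2 mux does not support it yet:

```
listen fe3
    mode http
    option http-use-htx          # enable the experimental HTX representation
    bind "fd@${fe3}" proto h2    # forced H2 mux: incompatible with HTX in 1.9-dev
    server srv3 192.0.2.10:8080
```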