Re: [PATCH] add further tcp_info fetchers
> What do you think ? If you want, as a first step we can merge your patch > as-is with surrounding #ifdef __linux__ and drop the two parts that are > not compatible with 2.4. Probably that we'll have to think about dropping > support for linux < 2.6.something for version 1.8. I think this seems like the most reasonable solution for the time being. I agree in the long run it probably makes sense to have a wrapper around tcp_info to make things agnostic. -Joe On Wed, Aug 10, 2016 at 11:39 AM, Willy Tarreau wrote: > Hi again Joe, > > On Wed, Aug 10, 2016 at 05:36:16PM +0200, Willy Tarreau wrote: > > On Wed, Aug 10, 2016 at 07:11:44AM -0700, Joe Williams wrote: > > > Hello list, > > > > > > Adding on to Theirry's work ( > > > http://git.haproxy.org/?p=haproxy.git;a=commit;h= > 6310bef51148b747f9019bd0bd67fd285eff0ae3) > > > I have added a few more fetchers for counters based on the tcp_info > struct > > > maintained by the kernel. > > > > Thanks for this! As I told you initially I thought we wouldn't need > > the extra metrics, but you proved me wrong :-) > > > > I've merged it and added your comment as the commit message. However > > I was having a doubt about the presence of older fields on older kernels, > > so I gave it a try with linux-2.6.32 and glibc 2.3.6 and it failed on me > > like this : > > > > src/proto_tcp.c: In function `get_tcp_info': > > src/proto_tcp.c:2407: error: structure has no member named `tcpi_rcv_rtt' > > src/proto_tcp.c:2408: error: structure has no member named > `tcpi_total_retrans' > > make: *** [src/proto_tcp.o] Error 1 > > make: *** Waiting for unfinished jobs > > > > So I'm seeing two possibilities : > > - either you don't need these ones and we simply drop them from the > patch > > (the most likely solution given that total_retrans is meaningless in > HTTP > > since it applies to the whole connection) > > > > - or we find a way to detect them and disable them at build time (I'm > > looking at this now). 
> > > > Please let me know, I've not pushed the commit yet, and I'd admit that > > the first option still seems the easiest to me :-/ > > From what I'm seeing it also breaks FreeBSD where only Thierry's entries > are declared, and some of yours exist with a "__" in front of their name : > > clang -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing > -Wdeclaration-after-statement -DTPROXY -DCONFIG_HAP_CRYPT > -DENABLE_POLL -DENABLE_KQUEUE -DCONFIG_HAPROXY_VERSION=\"1.7-dev3-f5f03ef\" > -DCONFIG_HAPROXY_DATE=\"2016/08/10\" -c -o src/proto_tcp.o src/proto_tcp.c > src/proto_tcp.c:2392:34: error: use of undeclared identifier 'SOL_TCP' > if (getsockopt(conn->t.sock.fd, SOL_TCP, TCP_INFO, &info, &optlen) > == -1) > ^ > src/proto_tcp.c:2400:35: error: no member named 'tcpi_unacked' in 'struct > tcp_info'; did you mean '__tcpi_unacked'? > case 2: smp->data.u.sint = info.tcpi_unacked;break; > ^~~~ > __tcpi_unacked > /usr/include/netinet/tcp.h:212:12: note: '__tcpi_unacked' declared here > u_int32_t __tcpi_unacked; > ^ > src/proto_tcp.c:2401:35: error: no member named 'tcpi_sacked' in 'struct > tcp_info'; did you mean '__tcpi_sacked'? > case 3: smp->data.u.sint = info.tcpi_sacked; break; > ^~~ > __tcpi_sacked > /usr/include/netinet/tcp.h:213:12: note: '__tcpi_sacked' declared here > u_int32_t __tcpi_sacked; > ^ > src/proto_tcp.c:2402:35: error: no member named 'tcpi_lost' in 'struct > tcp_info'; did you mean '__tcpi_lost'? > case 4: smp->data.u.sint = info.tcpi_lost; break; > ^ > __tcpi_lost > /usr/include/netinet/tcp.h:214:12: note: '__tcpi_lost' declared here > u_int32_t __tcpi_lost; > ^ > src/proto_tcp.c:2403:35: error: no member named 'tcpi_retrans' in 'struct > tcp_info'; did you mean '__tcpi_retrans'? > case 5: smp->data.u.sint = info.tcpi_retrans;break; > ^~~~ >
Re: [PATCH] add further tcp_info fetchers
Willy, I think we can just drop those two from the patch. I'll be happy to have the rest. Thanks! -Joe On Wed, Aug 10, 2016 at 8:36 AM, Willy Tarreau wrote: > Hi Joe, > > On Wed, Aug 10, 2016 at 07:11:44AM -0700, Joe Williams wrote: > > Hello list, > > > > Adding on to Theirry's work ( > > http://git.haproxy.org/?p=haproxy.git;a=commit;h= > 6310bef51148b747f9019bd0bd67fd285eff0ae3) > > I have added a few more fetchers for counters based on the tcp_info > struct > > maintained by the kernel. > > Thanks for this! As I told you initially I thought we wouldn't need > the extra metrics, but you proved me wrong :-) > > I've merged it and added your comment as the commit message. However > I was having a doubt about the presence of older fields on older kernels, > so I gave it a try with linux-2.6.32 and glibc 2.3.6 and it failed on me > like this : > > src/proto_tcp.c: In function `get_tcp_info': > src/proto_tcp.c:2407: error: structure has no member named `tcpi_rcv_rtt' > src/proto_tcp.c:2408: error: structure has no member named > `tcpi_total_retrans' > make: *** [src/proto_tcp.o] Error 1 > make: *** Waiting for unfinished jobs > > So I'm seeing two possibilities : > - either you don't need these ones and we simply drop them from the patch > (the most likely solution given that total_retrans is meaningless in > HTTP > since it applies to the whole connection) > > - or we find a way to detect them and disable them at build time (I'm > looking at this now). > > Please let me know, I've not pushed the commit yet, and I'd admit that > the first option still seems the easiest to me :-/ > > Thanks, > Willy >
[PATCH] add further tcp_info fetchers
Hello list, Adding on to Thierry's work ( http://git.haproxy.org/?p=haproxy.git;a=commit;h=6310bef51148b747f9019bd0bd67fd285eff0ae3) I have added a few more fetchers for counters based on the tcp_info struct maintained by the kernel. Thanks. -Joe 0001-add-further-tcp-info-fetchers.patch Description: Binary data
lua api
List, I am trying to figure out how to use the new lua API. After reading https://raw.githubusercontent.com/yuxans/haproxy/master/doc/lua-api/index.rst it still isn't clear to me how to get the client IP of a connection. Is information about the socket available inside lua? If so, any suggestions on how to access it? I am hoping to get the IP address from each HTTP request and do some processing on it. Thanks! -Joe
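[Editor's note] For reference, the dev-era Lua binding exposes the sample fetches on the transaction object, so something along these lines should return the client address. This is an untested sketch against that API snapshot; `core.register_fetches` and `txn.f:src()` are taken from the lua-api documentation linked above:

```lua
-- Hedged sketch: expose the client source address as a Lua fetch.
-- Assumes the dev-era API where sample fetches hang off txn.f.
core.register_fetches("client_ip", function(txn)
    return txn.f:src()   -- the connection's client IP, as a string
end)
```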
Re: building haproxy with lua support
Thanks everyone, the patch worked perfectly for me. On Tue, Mar 17, 2015 at 6:36 AM, Thierry FOURNIER wrote: > On Tue, 17 Mar 2015 14:35:20 +0300 > Dmitry Sivachenko wrote: > > > > > > On 17 March 2015, at 13:17, Thierry FOURNIER > wrote: > > > > > > On Tue, 17 Mar 2015 08:38:23 +0100 > > > Baptiste wrote: > > > > > >> On Tue, Mar 17, 2015 at 1:51 AM, Joe Williams > wrote: > > >>> List, > > >>> > > >>> I seem to be running into issues building haproxy with lua support > using > > >>> HEAD. Any thoughts? > > >>> > > >>> joe@ubuntu:~/haproxy$ make DEBUG=-ggdb CFLAGS=-O0 TARGET=linux2628 > > >>> USE_LUA=yes LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/ > LDFLAGS=-ldl > > >>> > > >>> /opt/lua53/lib//liblua.a(loadlib.o): In function `lookforfunc': > > >>> loadlib.c:(.text+0x502): undefined reference to `dlsym' > > >>> loadlib.c:(.text+0x549): undefined reference to `dlerror' > > >>> loadlib.c:(.text+0x576): undefined reference to `dlopen' > > >>> loadlib.c:(.text+0x5ed): undefined reference to `dlerror' > > >>> /opt/lua53/lib//liblua.a(loadlib.o): In function `gctm': > > >>> loadlib.c:(.text+0x781): undefined reference to `dlclose' > > >>> collect2: error: ld returned 1 exit status > > >>> make: *** [haproxy] Error 1 > > >>> > > >>> joe@ubuntu:~/haproxy$ /opt/lua53/bin/lua -v > > >>> Lua 5.3.0 Copyright (C) 1994-2015 Lua.org, PUC-Rio > > >>> > > >>> Thanks! > > >>> > > >>> -Joe > > > > > > > > > Thank you, > > > > > > In fact I build with SSL activated, and libssl is already > > > linked with the dl library, so I don't see this compilation error. > > > > > > It is fixed, the patch is attached. > > > > > > This patch will break FreeBSD (and other OSes) which do not have libdl. > > Hi, > > Thanks. Willy just fixed this. Now -ldl is implicit on Linux and it must be > activated or deactivated explicitly on other OSes. > > Thierry >
building haproxy with lua support
List, I seem to be running into issues building haproxy with lua support using HEAD. Any thoughts? joe@ubuntu:~/haproxy$ make DEBUG=-ggdb CFLAGS=-O0 TARGET=linux2628 USE_LUA=yes LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/ LDFLAGS=-ldl /opt/lua53/lib//liblua.a(loadlib.o): In function `lookforfunc': loadlib.c:(.text+0x502): undefined reference to `dlsym' loadlib.c:(.text+0x549): undefined reference to `dlerror' loadlib.c:(.text+0x576): undefined reference to `dlopen' loadlib.c:(.text+0x5ed): undefined reference to `dlerror' /opt/lua53/lib//liblua.a(loadlib.o): In function `gctm': loadlib.c:(.text+0x781): undefined reference to `dlclose' collect2: error: ld returned 1 exit status make: *** [haproxy] Error 1 joe@ubuntu:~/haproxy$ /opt/lua53/bin/lua -v Lua 5.3.0 Copyright (C) 1994-2015 Lua.org, PUC-Rio Thanks! -Joe
Re: Patch for ALPN compatibility with OpenSSL development
It would be great to see this patch find its way into the next dev release. Let us know if any changes need to be made. -- Joe Williams williams@gmail.com WILLI567-ARIN On February 13, 2014 at 4:32:42 AM, Dirkjan Bussink (d.buss...@gmail.com) wrote: Hi all, At GitHub we’ve worked on a patch to make HAProxy’s ALPN code compatible with the patches for it that have landed in OpenSSL: http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f017a8f9db3a79f3a3406cf8d493ccd346db691 This final version is slightly different from what HAProxy currently expects, which is based on some custom OpenSSL patches. Let me know if this is a good approach towards fixing this problem or whether it should be done differently. - Dirkjan Bussink - 0001-Use-ALPN-support-as-it-will-be-available-in-OpenSSL-.patch, 4.2 KB
Re: proxy protocol and websockets
Best I can tell this is specifically due to having http-server-close enabled in my defaults section. Commenting that out seems to fix this issue. I assume the connection gets killed just after the upgrade is completed and then the client is left hanging. -Joe -- Name: Joseph A. Williams Email: williams@gmail.com On Saturday, August 4, 2012 at 10:45 AM, joseph williams wrote: > > On Aug 3, 2012, at 9:33 PM, Willy Tarreau mailto:w...@1wt.eu)> > wrote: > > > Hi Joe, > > > > On Fri, Aug 03, 2012 at 03:54:35PM -0700, Joe Williams wrote: > > > List, > > > > > > I am attempting to setup stud, haproxy (1.5-dev7) and a backend web > > > sockets > > > server using proxy protocol to communicate between stud and haproxy. It > > > seems > > > like my requests are making it to the backend server but the client never > > > receives anything. > > > > > > This is the only thing I ever see in the logs: > > > > > > Aug 3 22:14:36 10.178.2.72 haproxy[13312]: IP:49494 > > > [03/Aug/2012:22:14:36.608] http-proxy websocket/host 94/0/0/2/96 101 148 > > > - - 12/12/0/0/0 0/0 "GET /streaming/handshake HTTP/1.1" > > > > 148 bytes for a handshake response seem very short (though possible). > > I see nothing abnormal in your config. Could you take a capture of the > > response handshake ? > > > > When you say that not passing via haproxy works, does this mean that > > you're forwarding from stud to the server directly ? > > > > > I did some playing around and was able to make a couple different working > configurations using haproxy by itself. At this point I think I have narrowed > it down to stud and/or using proxy protocol. I'll do some more testing and > reply back with results soon. > > -Joe
proxy protocol and websockets
List, I am attempting to set up stud, haproxy (1.5-dev7) and a backend websocket server using the proxy protocol to communicate between stud and haproxy. It seems like my requests are making it to the backend server but the client never receives anything. This is the only thing I ever see in the logs: Aug 3 22:14:36 10.178.2.72 haproxy[13312]: IP:49494 [03/Aug/2012:22:14:36.608] http-proxy websocket/host 94/0/0/2/96 101 148 - - 12/12/0/0/0 0/0 "GET /streaming/handshake HTTP/1.1" Here are my ACLs: acl websocket hdr(Upgrade) -i websocket acl websocket_host hdr(host) -i ws.blah.com use_backend websocket if websocket or websocket_host My backend: backend websocket balance source option forwardfor timeout queue 5000 timeout server 8640 timeout connect 8640 server host host:8080 weight 1 maxconn 5000 check Defaults: defaults mode http log global option httplog monitor-uri /_haproxy_health_check option dontlognull option log-health-checks option log-separate-errors retries 3 option redispatch maxconn 16384 timeout connect 5000 timeout client 5 timeout server 5 option http-server-close Talking directly to the backend server works correctly while going through haproxy does not. Any ideas about what might be going on? -Joe -- Name: Joseph A. Williams Email: williams@gmail.com
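[Editor's note] The follow-up in this thread traces the problem to `option http-server-close` in the defaults section. One way to keep it there while exempting websocket traffic is to negate the inherited option in that one backend (haproxy supports a `no` prefix for this); a hedged config sketch based on the backend above:

```
backend websocket
    # undo the "option http-server-close" inherited from defaults so
    # the upgraded connection is left open as a tunnel
    no option http-server-close
    balance source
    option forwardfor
    timeout queue 5000
    timeout server 8640
    timeout connect 8640
    server host host:8080 weight 1 maxconn 5000 check
```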
halog manpage
Does a halog man page exist? If not, it would be great if someone who knows what all the options are could document all of them. The best reference I know of is the following thread, which does not include many of the newer filters, etc. http://www.mail-archive.com/haproxy@formilux.org/msg02962.html Thanks! -Joe -- Name: Joseph A. Williams Email: williams@gmail.com
Re: status of master/worker model in 1.5
It would be great to see these changes merged in. Willy, any thoughts? -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe On Tuesday, September 27, 2011 at 6:28 PM, Simon Horman wrote: > On Mon, Sep 19, 2011 at 12:13:14PM -0400, Adam Kocoloski wrote: > > Hi all, Simon Horman posted a patch set back in March that enabled > > haproxy to reload its configuration without refusing connections. I > > don't think the patches have been merged yet -- are they on track for > > the 1.5 release? Regards, > > > > > Hi Adam, > > as I understand things, Willy still has a few concerns with those > changes that we are yet to work through. > >
Re: 1.5 status
On Mar 1, 2011, at 12:46 AM, Willy Tarreau wrote: > On Tue, Mar 01, 2011 at 03:30:08PM +0800, Delta Yeh wrote: >> Hi Willy, >> >> Do you have any plan to add http compress feature into haproxy ? > > Yes, we'll probably implement it here at Exceliance once we're done > with SSL. The internal reworks needed to address SSL are the same as > compression. For a long time I've been against compression because > of the added latency that freezes the whole process while compressing > a buffer. With nowadays processors, compressing a 16kB buffer should > take less than a millisecond and will not slow the whole process down > too much. Also, the internal scheduler supports priorities so we can > lower the one of the compressing tasks. > >> And what is the status of SSL feature? I read a post >> on the status of SSL , in 2009 ? or maybe early 2010. > > The devs should start here in a few months and take several months. Thanks for all the details Willy, glad to hear things are easing up for you. :) -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
1.5 status
Willy and list, I am curious of the development status of 1.5. It looks like there have been some recent commits but no dev release since dev3 months ago. Thoughts? -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: url based sticky sessions
On Jan 10, 2011, at 5:06 PM, David Cournapeau wrote: > On Tue, Jan 11, 2011 at 8:40 AM, Joe Williams wrote: >> >> List, >> >> Is it possible to setup an ACL that will any client that hits a specific uri >> will be sticky and subsequent requests will be routed to the same backend >> server? Note that I am using "option http-server-close" to force each >> request to hit a different server on the backend, I would like to continue >> to do this for all but one uri. > > Do you mean the whole URI or only part of it ? There is some code in > the dev version of haproxy to stick session relatively to url > parameter, but nothing else related to the url yet as far as I know. path_beg is good enough for me in this case. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: url based sticky sessions
Did a quick test and it seems "option http-server-close" always trumps the stick table and stick match. Do I have any options here? Thanks. -Joe On Jan 10, 2011, at 4:02 PM, Joe Williams wrote: > Looking deeper it seems like stick tables might work here. Will these > conflict with "option http-server-close"? > > -Joe > > > On Jan 10, 2011, at 3:40 PM, Joe Williams wrote: > >> >> List, >> >> Is it possible to setup an ACL that will any client that hits a specific uri >> will be sticky and subsequent requests will be routed to the same backend >> server? Note that I am using "option http-server-close" to force each >> request to hit a different server on the backend, I would like to continue >> to do this for all but one uri. >> >> Thanks. >> -Joe >> >> >> >> Name: Joseph A. Williams >> Email: j...@joetify.com >> Blog: http://www.joeandmotorboat.com/ >> Twitter: http://twitter.com/williamsjoe >> >> > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: url based sticky sessions
Looking deeper it seems like stick tables might work here. Will these conflict with "option http-server-close"? -Joe On Jan 10, 2011, at 3:40 PM, Joe Williams wrote: > > List, > > Is it possible to setup an ACL that will any client that hits a specific uri > will be sticky and subsequent requests will be routed to the same backend > server? Note that I am using "option http-server-close" to force each request > to hit a different server on the backend, I would like to continue to do this > for all but one uri. > > Thanks. > -Joe > > > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
url based sticky sessions
List, Is it possible to set up an ACL so that any client that hits a specific URI will be sticky and subsequent requests will be routed to the same backend server? Note that I am using "option http-server-close" to force each request to hit a different server on the backend; I would like to continue to do this for all but one URI. Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: disable-on-404 and tracking
On Dec 8, 2010, at 3:53 PM, Willy Tarreau wrote: > Hi Joe, > > On Mon, Dec 06, 2010 at 03:19:36PM -0800, Joe Williams wrote: >> >> On Dec 6, 2010, at 2:45 PM, Bryan Talbot wrote: >> >>> I worked around this issue by including the "option httpchk" in the >>> backend but never using the "check" option for the servers in that >>> backend that are tracked. The server lines do contain the "track" >>> option. >>> >>> >>> backend be1 >>> balance roundrobin >>> http-check disable-on-404 >>> option httpchk HEAD /online.php HTTP/1.1\r\nHost:\ healthcheck >>> server 1.2.3.4 1.2.3.4:80 check >>> >>> backend be2 >>> balance roundrobin >>> http-check disable-on-404 >>> option httpchk HEAD /online.php HTTP/1.1\r\nHost:\ healthcheck >>> server 1.2.3.4 1.2.3.4:80 track be1/1.2.3.4 >> >> >> Thanks Bryan, that should hold me over for now. >> >> This seems like a bug IMHO, track should cause the backend to "inherit" >> http-check disable-on-404 from the main backend. > > I just glanced over that thread and I agree we should get this fixed one > way or another. It's not really a bug but a side effect of dependencies > between features. > > At the very least, we should document the matrix of all tracked/tracking > modes with their possible options (even if we explicitly have to enable > httpchk and disable-on-404 in both backends for whatever reason). Cool, I'm glad we agree. :) I'm happy to work on a patch if you have time to give me some guidance. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: disable-on-404 and tracking
On Dec 6, 2010, at 2:45 PM, Bryan Talbot wrote: > I worked around this issue by including the "option httpchk" in the > backend but never using the "check" option for the servers in that > backend that are tracked. The server lines do contain the "track" > option. > > > backend be1 >balance roundrobin >http-check disable-on-404 >option httpchk HEAD /online.php HTTP/1.1\r\nHost:\ healthcheck >server 1.2.3.4 1.2.3.4:80 check > > backend be2 >balance roundrobin >http-check disable-on-404 >option httpchk HEAD /online.php HTTP/1.1\r\nHost:\ healthcheck >server 1.2.3.4 1.2.3.4:80 track be1/1.2.3.4 Thanks Bryan, that should hold me over for now. This seems like a bug IMHO, track should cause the backend to "inherit" http-check disable-on-404 from the main backend. -Joe > On Mon, Dec 6, 2010 at 10:51 AM, Joe Williams wrote: >> Just to add some info to this thread, I did some testing and I get some >> combination of the following errors depending on where (default, backends, >> etc) I have the disable-on-404 directive. >> >> config : 'disable-on-404' will be ignored for backend 'test' (requires >> 'option httpchk'). >> config : backend 'test', server 'test': unable to use joe/node001 for >> tracing: disable-on-404 option inconsistency. >> config : 'disable-on-404' will be ignored for frontend 'http_proxy' >> (requires 'option httpchk'). >> >> I assume this is by design for some reason but certainly seems like a >> desirable feature. Can anyone point me in the right direction regarding a >> writing a patch to "fix" it? >> >> Thanks. >> -Joe >> >> >> On Dec 6, 2010, at 8:55 AM, Joe Williams wrote: >> >>> Anyone have any thoughts? Is it possible to use tracking and disable-on-404 >>> together? 
>>> >>> -Joe >>> >>> >>> On Dec 2, 2010, at 3:41 PM, Joe Williams wrote: >>> >>>> >>>> On Dec 2, 2010, at 2:28 PM, Krzysztof Olędzki wrote: >>>> >>>>> On 2010-12-02 21:28, Joe Williams wrote: >>>>>> >>>>>> List, >>>>>> >>>>>> I am attempting to enable the disable-on-404 option on only the >>>>>> backends that other backends track. It seems that the secondary >>>>>> backends do not like this and error out saying it is "inconsistent" >>>>>> even if disable-on-404 is only enabled in the backend that they >>>>>> track. Is it possible to have disable-on-404 without httpchk in each >>>>>> backend? >>>>> >>>>> Yes, you need to enable disable-on-404 on both tracked and tracking >>>>> backends. >>>> >>>> Doesn't that also mean that I have to enable httpchk on all those backends >>>> as well? >>>> >>>> -Joe >>>> >>>> >>>> Name: Joseph A. Williams >>>> Email: j...@joetify.com >>>> Blog: http://www.joeandmotorboat.com/ >>>> Twitter: http://twitter.com/williamsjoe >>>> >>>> >>> >>> Name: Joseph A. Williams >>> Email: j...@joetify.com >>> Blog: http://www.joeandmotorboat.com/ >>> Twitter: http://twitter.com/williamsjoe >>> >>> >> >> Name: Joseph A. Williams >> Email: j...@joetify.com >> Blog: http://www.joeandmotorboat.com/ >> Twitter: http://twitter.com/williamsjoe >> >> >> Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: stunnel patch updates
On Dec 6, 2010, at 1:31 PM, Cyril Bonté wrote: > Hi Joe, > Le lundi 4 octobre 2010 21:42:09, Joe Williams a écrit : > > Anyone have updated patches for stunnel 4.34, specifically for the listen > > queue length and X-Forwarded-For? The patches on the haproxy site don't > > seem to work. > I don't know if you still need them, but as I'll also need them soon, I've > rediffed both patches. > You'll find in attachment : > - stunnel-4.34-listen-queue.diff > - stunnel-4.34-xforwared-for.diff > Hope this helps. > -- > Cyril Bonté > Thanks Cyril, these will be handy. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: disable-on-404 and tracking
Just to add some info to this thread, I did some testing and I get some combination of the following errors depending on where (default, backends, etc) I have the disable-on-404 directive. config : 'disable-on-404' will be ignored for backend 'test' (requires 'option httpchk'). config : backend 'test', server 'test': unable to use joe/node001 for tracing: disable-on-404 option inconsistency. config : 'disable-on-404' will be ignored for frontend 'http_proxy' (requires 'option httpchk'). I assume this is by design for some reason but certainly seems like a desirable feature. Can anyone point me in the right direction regarding writing a patch to "fix" it? Thanks. -Joe On Dec 6, 2010, at 8:55 AM, Joe Williams wrote: > Anyone have any thoughts? Is it possible to use tracking and disable-on-404 together? > > -Joe > > > On Dec 2, 2010, at 3:41 PM, Joe Williams wrote: > >> >> On Dec 2, 2010, at 2:28 PM, Krzysztof Olędzki wrote: >> >>> On 2010-12-02 21:28, Joe Williams wrote: >>>> >>>> List, >>>> >>>> I am attempting to enable the disable-on-404 option on only the >>>> backends that other backends track. It seems that the secondary >>>> backends do not like this and error out saying it is "inconsistent" >>>> even if disable-on-404 is only enabled in the backend that they >>>> track. Is it possible to have disable-on-404 without httpchk in each >>>> backend? >>> >>> Yes, you need to enable disable-on-404 on both tracked and tracking >>> backends. >> >> Doesn't that also mean that I have to enable httpchk on all those backends >> as well? >> >> -Joe >> >> >> Name: Joseph A. Williams >> Email: j...@joetify.com >> Blog: http://www.joeandmotorboat.com/ >> Twitter: http://twitter.com/williamsjoe >> >> > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: disable-on-404 and tracking
Anyone have any thoughts? Is it possible to use tracking and disable-on-404 together? -Joe On Dec 2, 2010, at 3:41 PM, Joe Williams wrote: > > On Dec 2, 2010, at 2:28 PM, Krzysztof Olędzki wrote: > >> On 2010-12-02 21:28, Joe Williams wrote: >>> >>> List, >>> >>> I am attempting to enable the disable-on-404 option on only the >>> backends that other backends track. It seems that the secondary >>> backends do not like this and error out saying it is "inconsistent" >>> even if disable-on-404 is only enabled in the backend that they >>> track. Is it possible to have disable-on-404 without httpchk in each >>> backend? >> >> Yes, you need to enable disable-on-404 on both tracked and tracking >> backends. > > Doesn't that also mean that I have to enable httpchk on all those backends as > well? > > -Joe > > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: disable-on-404 and tracking
On Dec 2, 2010, at 2:28 PM, Krzysztof Olędzki wrote: > On 2010-12-02 21:28, Joe Williams wrote: >> >> List, >> >> I am attempting to enable the disable-on-404 option on only the >> backends that other backends track. It seems that the secondary >> backends do not like this and error out saying it is "inconsistent" >> even if disable-on-404 is only enabled in the backend that they >> track. Is it possible to have disable-on-404 without httpchk in each >> backend? > > Yes, you need to enable disable-on-404 on both tracked and tracking > backends. Doesn't that also mean that I have to enable httpchk on all those backends as well? -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
disable-on-404 and tracking
List, I am attempting to enable the disable-on-404 option on only the backends that other backends track. It seems that the secondary backends do not like this and error out saying it is "inconsistent" even if disable-on-404 is only enabled in the backend that they track. Is it possible to have disable-on-404 without httpchk in each backend? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
CL error flag
List, I am seeing some 'CL' error flags in my logs, what does this one mean? Thanks. -Joe -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
stats page errors column
List, I didn't immediately see this in the docs. What types of errors (CD, sQ, etc) are included in the "error" column labeled as "conn" and "resp" on the haproxy stats page? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
config reload time gap and dropped requests
List, I am experiencing a gap between when the old process stops listening and the new process starts, where requests fail. AFAICT this is not a new issue; rather, we just started to notice it with an increased number of requests, and we found we can readily reproduce it. My understanding is that this is likely the time between when the SIGTTOU is sent to the old process and the new one is started. This is probably milliseconds, but we are definitely seeing dropped connections. It doesn't seem to me that having multiple haproxy processes would help in this case unless the reloads to each process are staggered. Does anyone else see the same issue? Are there workarounds available? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
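[Editor's note] For context, the usual soft-reload sequence lets the new process bind before the old ones are told to finish and quit (`-sf`), which narrows but does not fully close the window described above. A command sketch, assuming a pid file at /var/run/haproxy.pid:

```
# initial start
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
# soft reload: the new process signals the PIDs listed after -sf
# once it is ready, so they stop accepting and drain existing sessions
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)
```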
Re: kill existing connections immediately on failed health check
Awesome, thanks Willy. I look forward to testing it out. -Joe On Oct 8, 2010, at 10:04 AM, Willy Tarreau wrote: > Hi Joe, > > On Fri, Oct 08, 2010 at 07:37:20AM -0700, Joe Williams wrote: > (...) >>> Thus we could have : >>> >>> timeout server 5m >>> timeout dead-server 10s >>> >>> I'm not saying it would be easier (it would not in my opinion), but it would >>> provide much cleaner results. What do you think ? >> >> >> >> Willy, >> >> I think that should work, my issue is definitely long polling requests as >> you suggested. The only thing in addition to this would be to retry the >> request but failing faster might be good enough for now. As I mentioned in >> my second email doing ACL based timeouts would also work for me since I know >> which urls I expect to take a long time to return. > > OK. Concerning the ACL-based timeouts, I have added that to the 1.5 roadmap, > because I think that it's not too hard to do if we only support changing the > server timeout (others are useless in this case). > > Cheers, > Willy > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: kill existing connections immediately on failed health check
On Oct 7, 2010, at 2:48 PM, Willy Tarreau wrote: > Hi Joe, > > On Mon, Oct 04, 2010 at 03:46:34PM -0700, Joe Williams wrote: >> >> Is it possible for haproxy to basically kill (and/or retry) established >> backend connections for a failed backend server as soon as it fails a >> content check? Basically I have some long running requests that are expected >> to hang, in some cases the server they are connected to goes unavailable, >> failing the health check, but the connection sits until the timeout is >> reached while no new connections are routed to it. Ideally I would be able >> to keep my high server timeout but have those connections closed and retried >> (similar to redispatch/retry) if there was a health check failure after they >> were established. From what I can tell redispatch and retries don't cover >> this case. Thoughts? > > That's not possible at all and it would not be easy to do that because > at the moment there's no list of per-server connections. Also, it could > cause more issues than it would solve because we would often kill perfectly > working connections. Very often when an application server does not respond > to health checks, it basically does not accept new connections but still > processes existing ones. > > This is something which comes back from time to time, due to long polling > requests (mainly). I'm thinking that instead of actively killing connections, > maybe we should focus in shortening their timeouts. That way, working > connections are not killed and idle ones quickly disappear. And this would > also work for plain TCP (where this issue is often present too). And it > would also avoid killing all connections too fast when a server just > experiences a hickup. > > Thus we could have : > > timeout server 5m > timeout dead-server 10s > > I'm not saying it would be easier (it would not in my opinion), but it would > provide much cleaner results. What do you think ? 
Willy, I think that should work, my issue is definitely long polling requests as you suggested. The only thing in addition to this would be to retry the request but failing faster might be good enough for now. As I mentioned in my second email doing ACL based timeouts would also work for me since I know which urls I expect to take a long time to return. Thanks! -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: stunnel patch updates
Here's an updated listen queue depth patch for stunnel 4.32 -Joe On Oct 4, 2010, at 1:09 PM, Jim Riggs wrote: > On Oct 4, 2010, at 2:42 PM, Joe Williams wrote: > >> Anyone have updated patches for stunnel 4.34, specifically for the listen >> queue length and X-Forwarded-For? The patches on the haproxy site don't seem >> to work. > > > Attached is an updated version of the xforwardedfor patch that I use for > 4.32. I haven't tried it with 4.34 yet... > > stunnel-4.32-listen-queue.diff Description: Binary data Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
x-forwarded-for logging
I applied the x-forwarded-for patch to stunnel in hopes that haproxy would log the forwarded-for address but it doesn't seem to. Is this possible? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: kill existing connections immediately on failed health check
One other thought I had which is probably more complicated is timeouts based on ACLs. Specifically so I can set the timeout on specific URLs to be longer than the default. I think both features would be useful but either of them might be good enough to get around the issues I'm having. -Joe On Oct 4, 2010, at 3:46 PM, Joe Williams wrote: > > Is it possible for haproxy to basically kill (and/or retry) established > backend connections for a failed backend server as soon as it fails a content > check? Basically I have some long running requests that are expected to hang, > in some cases the server they are connected to goes unavailable, failing the > health check, but the connection sits until the timeout is reached while no > new connections are routed to it. Ideally I would be able to keep my high > server timeout but have those connections closed and retried (similar to > redispatch/retry) if there was a health check failure after they were > established. From what I can tell redispatch and retries don't cover this > case. Thoughts? > > Thanks. > > -Joe > > > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
kill existing connections immediately on failed health check
Is it possible for haproxy to basically kill (and/or retry) established backend connections for a failed backend server as soon as it fails a content check? Basically I have some long running requests that are expected to hang, in some cases the server they are connected to goes unavailable, failing the health check, but the connection sits until the timeout is reached while no new connections are routed to it. Ideally I would be able to keep my high server timeout but have those connections closed and retried (similar to redispatch/retry) if there was a health check failure after they were established. From what I can tell redispatch and retries don't cover this case. Thoughts? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
stunnel patch updates
Anyone have updated patches for stunnel 4.34, specifically for the listen queue length and X-Forwarded-For? The patches on the haproxy site don't seem to work. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
syslog hostnames
From what I can tell haproxy does not include the hostname with the syslog messages it sends. Additionally I don't see this as a configurable option. This is causing my syslog server to do reverse dns lookups to get a hostname. Is it possible to set the hostname? If not is this a feature that can be added? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
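For the record, newer haproxy releases added a global keyword for exactly this; a minimal sketch, assuming a version that supports `log-send-hostname` (the address and hostname below are placeholders):

    global
        log 192.168.0.1 local0
        log-send-hostname lb1

With this set, haproxy fills in the hostname field of the syslog messages it emits, so the syslog server no longer needs reverse DNS lookups.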
Re: Tt of +0 without logasap
Ha! You're right, must be Friday afternoon. :D -Joe On Sep 17, 2010, at 2:01 PM, Cyril Bonté wrote: > Hi, > > Le vendredi 17 septembre 2010 22:27:17, Joe Williams a écrit : >> I am not using logasap and am seeing response times like >> "1903/1903/0/1/+0". From the docs it sounds like this should only happen >> with logasap. Any ideas? > > Can you provide a full log line (hide the sensitive data) ? > I wonder if you're not looking at the wrong fields (actconn '/' feconn '/' > beconn '/' srv_conn '/' retries*) > > -- > Cyril Bonté > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Tt of +0 without logasap
I am not using logasap and am seeing response times like "1903/1903/0/1/+0". From the docs it sounds like this should only happen with logasap. Any ideas? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
hanging in syn_sent
Anyone ever seen connections to haproxy hang in a syn_sent state and then fail while other connections (to/from the same hosts) work perfectly fine? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: PD error code
Thanks guys, missed that in the docs. -Joe On Sep 3, 2010, at 12:49 PM, Cyril Bonté wrote: > Le vendredi 3 septembre 2010 20:59:35, Bryan Talbot a écrit : >> Section 8.5 of the doc ( >> http://haproxy.1wt.eu/download/1.3/doc/configuration.txt) says: >> >> - On the first character, a code reporting the first event which caused the >>session to terminate : >> >> >> P : the session was prematurely aborted by the proxy, because of a >>connection limit enforcement, because a DENY filter was >> matched, because of a security check which detected and blocked a >> dangerous error in server response which might have caused information >> leak (eg: cacheable cookie), or because the response was processed by the >> proxy (redirect, stats, etc...). >> >> >> - on the second character, the TCP or HTTP session state when it was closed >> : >> >> D : the session was in the DATA phase. > > Since HAProxy 1.4, this combination can mean (maybe not exhaustive) : > - your server replied with an invalid chunked transfer encoding (wrong size > in at least one chunk, bad chunk delimitation, ...) > - there's a missing \n after a \r (invalid CRLF) in the data part (chunks, > trailers, ...) > > Hope this helps. > > -- > Cyril Bonté > > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: PD error code
Anyone know what this one could be? -Joe On Sep 1, 2010, at 10:35 AM, Joe Williams wrote: > > I've seen a few "PD" error codes in my logs but don't see it mentioned in the > docs. What does this flag stand for? > > Thanks. > -Joe > > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
PD error code
I've seen a few "PD" error codes in my logs but don't see it mentioned in the docs. What does this flag stand for? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: halog feature
Willy, I think it would be really handy for me to have. I need to keep track of any naughty clients and backend nodes. -Joe On Aug 28, 2010, at 10:50 AM, Willy Tarreau wrote: > Hi Joe, > > On Fri, Aug 27, 2010 at 12:59:07PM -0700, Joe Williams wrote: >> >> Is it possible to use halog to get a distribution of error codes (RC, cH, >> CH, CQ, CT, etc) in the same way the "-st" switch works? If not, is this >> something that could be added? > > no right now it's not implemented, but it would not be too hard, it > would basically be a transcoding of these codes to numeric codes. > But do you think it would be *that* useful ? In fact I have no idea. > > Cheers, > Willy > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
halog feature
Is it possible to use halog to get a distribution of error codes (RC, cH, CH, CQ, CT, etc) in the same way the "-st" switch works? If not, is this something that could be added? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
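Until halog grows such a switch, a rough distribution can be pulled with standard tools. A sketch, assuming the default syslog prefix shown elsewhere in this archive puts the termination state in the 15th whitespace-separated field (adjust the column number for your own log format):

```shell
# Two sample lines in the shape of the PR-- log quoted in this archive.
cat > sample.log <<'EOF'
Aug 26 14:45:54 host haproxy[20549]: 1.2.3.4:26591 [26/Aug/2010:14:45:53.438] http_proxy http_proxy/<NOSRV> -1/-1/-1/-1/926 400 187 - - PR-- 1093/1093/0/0/0 0/0 "GET / HTTP/1.0"
Aug 26 14:46:10 host haproxy[20549]: 1.2.3.5:26600 [26/Aug/2010:14:46:09.120] http_proxy be/srv1 0/0/1/2/3 200 512 - - ---- 10/10/0/0/0 0/0 "GET / HTTP/1.0"
EOF
# Print the first two characters of the termination state field,
# then count and rank each distinct code.
awk '{print substr($15, 1, 2)}' sample.log | sort | uniq -c | sort -rn
```

Run against a real log in place of sample.log; the first two characters of the state field are the codes (PR, SD, cH, ...) that halog -st would ideally bucket.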
Re: request introspection
It looks like "show errors" isn't returning anything and I am definitely seeing PR's in the logs. Aug 26 14:45:54 HOST haproxy[20549]: IP:26591 [26/Aug/2010:14:45:53.438] http_proxy http_proxy/ -1/-1/-1/-1/926 400 187 - - PR-- 1093/1093/0/0/0 0/0 "" Any ideas? -Joe On Aug 26, 2010, at 7:38 AM, Joe Williams wrote: > > > On Aug 25, 2010, at 11:13 PM, Willy Tarreau wrote: > >> Hi Joe, >> >> On Wed, Aug 25, 2010 at 03:46:06PM -0700, Joe Williams wrote: >>> >>> Is there anyway to look deeper into erroneous requests? Preferably having >>> haproxy log more details in the cases of something like a PR (400 status >>> code). I have some naughty clients and want to see what haproxy is seeing >>> and why it determines a request as PR. If haproxy doesn't have a facility >>> for this anyone have suggestions on tools (better than tcpdump?) to get >>> this information? >> >> yes there's already something for that. Connect to your stats socket >> using socat and issue "show errors". You will see a precise dump of >> the last invalid request and invalid response for each frontend/backend, >> with a pointer to the first faulty character. Example : >> >> $ echo "show errors" | socat stdio unix-connect:/var/run/haproxy.stat > > > Duh, for whatever reason it didn't occur to me to use the stats socket. > > As always thanks! > > -Joe > > > > Name: Joseph A. Williams > Email: j...@joetify.com > Blog: http://www.joeandmotorboat.com/ > Twitter: http://twitter.com/williamsjoe > > Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: request introspection
On Aug 25, 2010, at 11:13 PM, Willy Tarreau wrote: > Hi Joe, > > On Wed, Aug 25, 2010 at 03:46:06PM -0700, Joe Williams wrote: >> >> Is there anyway to look deeper into erroneous requests? Preferably having >> haproxy log more details in the cases of something like a PR (400 status >> code). I have some naughty clients and want to see what haproxy is seeing >> and why it determines a request as PR. If haproxy doesn't have a facility >> for this anyone have suggestions on tools (better than tcpdump?) to get this >> information? > > yes there's already something for that. Connect to your stats socket > using socat and issue "show errors". You will see a precise dump of > the last invalid request and invalid response for each frontend/backend, > with a pointer to the first faulty character. Example : > > $ echo "show errors" | socat stdio unix-connect:/var/run/haproxy.stat Duh, for whatever reason it didn't occur to me to use the stats socket. As always thanks! -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
request introspection
Is there any way to look deeper into erroneous requests? Preferably having haproxy log more details in cases of something like a PR (400 status code). I have some naughty clients and want to see what haproxy is seeing and why it determines a request as PR. If haproxy doesn't have a facility for this, does anyone have suggestions on tools (better than tcpdump?) to get this information? Thanks. -Joe Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/ Twitter: http://twitter.com/williamsjoe
Re: unexpected results with hdr_beg
Willy, thanks for the reply. I don't think that "-i" had anything to do with it as I have many ACLs and only two (one good, one erroneous) hosts that I know of were getting matched and routed to these servers. I was using curl to test so I don't think capitalization was an issue there. I'll see if I can reproduce the scenario and let you know what I can find. -Joe On 4/23/10 12:16 PM, Willy Tarreau wrote: Hi Joe, On Fri, Apr 23, 2010 at 09:26:56AM -0700, Joe Williams wrote: We had a case this morning where an ACL using hdr_beg(host) was matching a host header that didn't actually match afaict. Example: Requests to http://xyz.blah.com and http://abc.blah.com both were getting routed based on the following ACL when only one should have been. acl xyz hdr_beg(host) xyz. or xyz1. After changing the ACL to the following everything worked as expected. acl xyz hdr(host) -i xyz.blah.com Are you sure it's the hdr_beg() and not the "-i" which made the difference ? Internet host names are case insensitive, so it is possible that your request was sent with some upper cases which were not matched by the first line above but was OK by the second one. I am running haproxy 1.3.23 and will also mention my haproxy configuration is *very* large (hundreds of backends and ACLs), if that could have anything to do with it. That's unrelated. If you told me that removing one line changed anything I could have doubted, but here you just replaced one ACL match with another one, so it's not a matter of size. BTW, this week I received a config with 52000 ACLs and as many use_backend rules. The only thing I know about it was that it was said to be too slow where it was used. Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
unexpected results with hdr_beg
We had a case this morning where an ACL using hdr_beg(host) was matching a host header that didn't actually match afaict. Example: Requests to http://xyz.blah.com and http://abc.blah.com both were getting routed based on the following ACL when only one should have been. acl xyz hdr_beg(host) xyz. or xyz1. After changing the ACL to the following everything worked as expected. acl xyz hdr(host) -i xyz.blah.com I am running haproxy 1.3.23 and will also mention my haproxy configuration is *very* large (hundreds of backends and ACLs), if that could have anything to do with it. Thanks. -Joe -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: sL flag
Looks like all the most recent sL's I have seen have been GET requests. Thanks. -Joe On 2/27/10 10:08 AM, Joe Williams wrote: Willy, This was on 1.3.23, it might have been a POST I will need to go back through the logs to find out. Thanks. -Joe On 2/27/10 12:26 AM, Willy Tarreau wrote: On Fri, Feb 26, 2010 at 03:07:40PM -0800, Joe Williams wrote: I wasn't able to find it in the documentation, what does the "sL" termination flag stand for? strange. It means there was a server timeout during the last transfer from the server to the client. But normally the last transfer is identified because the server has already closed, so it's strange to see a timeout. Or maybe it was a post and we got the timeout in the other direction. I'm a bit puzzled. What version was it ? Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: sL flag
Willy, This was on 1.3.23; it might have been a POST, I will need to go back through the logs to find out. Thanks. -Joe On 2/27/10 12:26 AM, Willy Tarreau wrote: On Fri, Feb 26, 2010 at 03:07:40PM -0800, Joe Williams wrote: I wasn't able to find it in the documentation, what does the "sL" termination flag stand for? strange. It means there was a server timeout during the last transfer from the server to the client. But normally the last transfer is identified because the server has already closed, so it's strange to see a timeout. Or maybe it was a post and we got the timeout in the other direction. I'm a bit puzzled. What version was it ? Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
sL flag
I wasn't able to find it in the documentation, what does the "sL" termination flag stand for? Thanks. -Joe -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: Tuning HAProxy on EC2 instances?
We use haproxy and EC2 instances as load balancers for our clusters. The tuning we use is pretty standard (somaxconn, nf_conntrack_max, tcp_fin_timeout, rmem_max, wmem_max, etc.) running vanilla ubuntu AMIs. While EC2's instances and network have performance problems it is possible to get reasonable reliability and performance from them. We push tens of Mbps through a single c1.medium without issues, not sure about beyond that. -Joe On 1/31/10 3:14 PM, Willy Tarreau wrote: Hi Alexander, On Sun, Jan 31, 2010 at 11:36:02PM +0100, Alexander Staubo wrote: Has anyone any experience tuning HAProxy for performance when running on Amazon EC2 instances? For example, are there any kernel parameters that should be tuned differently, or are some instance types better than others? Does HAProxy generally perform well on EC2? well, last year I helped some guys in charge of a world wide sports event which was hosted there. The performance was terrible. Completely unstable. It was impossible to tune anything. Ping times would vary a lot. It was impossible to know where the bottlenecks were, because every machine was showing limited performance in turn without necessarily having its CPU saturated. It was noticed that the internal network was at least faulty, because the observed network congestions were not constant and were moving between machines. Sometimes it was even almost impossible to type in SSH. We also discovered that when they bought new nodes, some of them were under massive attacks, most likely because people who are attacked quickly drop the nodes with the IPs that belong to them and create new ones. So the attacked ones will be picked by the next customer... Finally they moved to a real hosting company with real machines and real performance in order to be able to participate in at least a little part of the event. 
In this experience, I think that for them, everything was virtual : the machines, the network, the support, the availability, the visitors and finally the profit. I really can't say what you could play on to improve quality. After having spent 3 full nights working with them on their machine, no sensible trend appeared whatever we did. I think the real knobs are outside your scope, on the other side of the VM :-/ Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
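The parameters listed at the top of this message look roughly like this in /etc/sysctl.conf; the values below are illustrative placeholders, not tuned recommendations:

    net.core.somaxconn = 4096
    net.netfilter.nf_conntrack_max = 262144
    net.ipv4.tcp_fin_timeout = 15
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216

Apply with `sysctl -p` and check a single key with e.g. `sysctl net.core.somaxconn`.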
Re: sending arbitrary errors to client
Cool, I'll take a look and see if it's something that I can handle. -Joe On 12/21/09 9:24 PM, Willy Tarreau wrote: On Mon, Dec 21, 2009 at 04:12:01PM -0800, Joe Williams wrote: Willy, Has this been added to the dev releases of 1.4? just on the todo list, and 1.4 is not finished BTW. The dev has slowed down because of the difficulties I encountered with a few prerequisite adaptations for later keep-alive support. By the way, if you're interested in implementing the feature yourself, it's not that hard. Basically you have to copy the redirect rules. Or even better, we could extend the redirect rules so that the "return" rules appear in the same list and their ordering is respected. That way we could have : redirect XXX if YYY return file XXX if YYY redirect XXX if YYY and have everything processed as expected. The code to load the file already exists since it's used by "errorfile". We could in fact make the "return" statement be a redirect rule with a status 200. Then we would support at least 'file XXX' or 'content "string"' for small data. Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: sending arbitrary errors to client
Willy, Has this been added to the dev releases of 1.4? Thanks. -Joe On 9/24/09 10:17 PM, Willy Tarreau wrote: On Thu, Sep 24, 2009 at 03:41:06PM -0700, Joe Williams wrote: Cool, something like this would be great to have in the arsenal. Think it will make it into 1.4? I hope, I have noted it here. But I hope this will not be used to transform haproxy into a RAM-based server :-/ Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: connslots and frontend/backend controls
Willy, I am just curious if any progress has been made on the persistence layer and thus enabling my idea of "total connections per host header". Thanks. -Joe On 10/10/09 12:20 AM, Willy Tarreau wrote: On Fri, Oct 09, 2009 at 05:19:46PM -0700, Joe Williams wrote: Thanks Willy, I appreciate you looking into it. Can you detail how this new verb might work? From your description it sounds like it will be just the total connections to a specific backend? Or will it be more similar to be_sess_rate but for total connections rather than rate? I am looking to do something like "total connections per host header" with the idea that I can send 503s if a single host header is flooding a backend. It will be per backend, just like be_sess_rate. The "total connections per host header" is something which will require that all the work on persistence is completed first, because it requires storing keyed data and accounting based on these data. This is the same reason we currently can't limit the number of concurrent connections per source IP. Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: connslots and frontend/backend controls
Thanks Willy, I appreciate you looking into it. Can you detail how this new verb might work? From your description it sounds like it will be just the total connections to a specific backend? Or will it be more similar to be_sess_rate but for total connections rather than rate? I am looking to do something like "total connections per host header" with the idea that I can send 503s if a single host header is flooding a backend. -Joe On Fri, 9 Oct 2009 22:52:38 +0200 Willy Tarreau wrote: > On Fri, Oct 09, 2009 at 11:08:09AM -0700, Joe Williams wrote: > > > > I am attempting to limit the number of connections on a per host > > header basis. Currently each host header has its own ACL and > > backend. This allows me to use maxconn and limit the number of > > connections per host header even if the physical server is the same > > in both backends. > > > > ### > > > > frontend http_proxy > > bind :8080 > > > > acl test1 hdr(host) test1.localhost:8080 > > > > acl test2 hdr(host) test2.localhost:8080 > > > > use_backend test1 if test1 > > > > use_backend test2 if test2 > > > > backend test1 > > server test localhost:5984 maxconn 1000 > > > > backend test2 > > server test localhost:5984 maxconn 1000 > > > > ### > > > > What I am wanting to do is use a single backend and decide the > > connections per host header in the frontend. I took a look at > > dst_conn and connslots and I don't think they will work for this as > > I am using a single frontend (dst_conn) and attempting to use a > > single backend (connslots). The idea would be "if HOSTHEADER has > > less than 1000 connections forward to SOMEBACKEND". > > > > Is something like this possible? > > I would have sworn there was something like that, but after checking > the doc it appears I was wrong. I might have confused with > be_sess_rate(). > > It's pretty simple to create a new ACL verb to match on the number of > connections per backend, as it is for almost anything that appears on > the stats page BTW. 
I'm adding that to the short-term TODO list. > > Willy > > -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: Connection Pooling
I would be interested in hearing about this possibility too, however in my case HTTP is okay. I will be in a similar multi-datacenter HA situation soon and something like this would be very cool. -Joe On Thu, 8 Oct 2009 06:22:58 -0700 Chris Goffinet wrote: > I was wondering if anyone has considered or if its possible (am I > missing something?) to do connection pooling in haproxy for TCP > backends? We've been using haproxy internally at Digg and it's > working out really well. Before joining Digg, at Yahoo we had > something very similar to haproxy, that supported connection pooling. > The general idea is that once you start running multiple datacenters > with multiple backends, the latency of TCP ACK between those > datacenters really matters when failures start occurring and you need > high availability and failover of backend services. > > The majority of our services are TCP based, not HTTP so keep alive > is out. I was wondering if this has ever been considered or possible > today? I can't find much info in the open source world regarding > doing such things, and thought I'd ask here. > > -Chris > -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
connslots and frontend/backend controls
I am attempting to limit the number of connections on a per host header basis. Currently each host header has its own ACL and backend. This allows me to use maxconn and limit the number of connections per host header even if the physical server is the same in both backends.

###

frontend http_proxy
    bind :8080

    acl test1 hdr(host) test1.localhost:8080
    acl test2 hdr(host) test2.localhost:8080

    use_backend test1 if test1
    use_backend test2 if test2

backend test1
    server test localhost:5984 maxconn 1000

backend test2
    server test localhost:5984 maxconn 1000

###

What I am wanting to do is use a single backend and decide the connections per host header in the frontend. I took a look at dst_conn and connslots and I don't think they will work for this as I am using a single frontend (dst_conn) and attempting to use a single backend (connslots). The idea would be "if HOSTHEADER has less than 1000 connections forward to SOMEBACKEND". Is something like this possible? Thanks. -Joe -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
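Willy's replies above mention adding a new ACL verb for this; later haproxy releases did grow a per-backend connection count criterion. A sketch of the single-backend idea, assuming a version that provides `be_conn` (the backend names `shared` and `overflow` are made up for illustration):

    frontend http_proxy
        bind :8080
        acl test1 hdr(host) test1.localhost:8080
        acl shared_full be_conn(shared) ge 1000
        use_backend overflow if test1 shared_full
        use_backend shared if test1

    backend shared
        server test localhost:5984

    backend overflow
        # no servers defined: anything routed here gets a 503

Note this still counts connections per backend, not per host header; the true "total connections per host header" accounting needs the keyed-storage/persistence work Willy describes above.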
Re: sending arbitrary errors to client
Cool, something like this would be great to have in the arsenal. Think it will make it into 1.4? -Joe On Thu, 24 Sep 2009 22:53:33 +0200 Willy Tarreau wrote: > On Thu, Sep 24, 2009 at 10:11:34AM -0700, Joe Williams wrote: > > > > Is it possible to send arbitrary errors to the client based on an > > ACL? Something similar to the "block" directive but where I can > > determine the error sent to the client? > > no, and you're right to remind me about this because I once needed > it and thought I would implement it later since it's not that hard. > Obviously I have forgotten :-/ > > I wanted to be able to send a file to a client for a given URL. We > already have everything to do this, it's basically a combination > of what we do with "errorfile" and ACLs. I don't want to implement > open-coded responses directly in the config though because that > will become a terrible mess. > > Willy > > -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: backend 412 http errors
Additionally, when was "option accept-invalid-http-response" added to the configuration API? I am running 1.3.15.1 and the "-c" option tells me it doesn't know what it is. Thanks again. -Joe On Thu, 20 Aug 2009 22:05:18 +0200 Willy Tarreau wrote: > On Thu, Aug 20, 2009 at 10:14:39AM -0700, Joe Williams wrote: > > > > I seem to be seeing 412 error codes from my backend in my haproxy > > logs and from what I can tell haproxy is producing 502 errors when > > this happens. In my case the 412 is expected and I would like to > > pass it along to the client. Is there an option to do this? > > haproxy has no reason to block a 412 response. However, it is very > likely that a header in the response is invalid, making the response > itself invalid. > > You can check this by enabling "option accept-invalid-http-response" > and seeing if it doesn't do that anymore. If so, I suggest that you > fix your application for the invalid header, otherwise there will > always be one part of the net who will have trouble accessing your > site. > > In order to figure out what header is wrong, I suggest that you enable > the global stats socket and connect to it using "socat", then issue > the "show errors" command. It will report a capture of last invalid > request and response for each frontend/backend, with the exact > location of the first anomaly found. > > It is very helpful to web developers who encounter trouble pushing > their apps in production. > > Regards, > Willy > > -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: backend 412 http errors
Willy, Thanks for the response. I am getting nothing with "show errors", however "show stat" is showing me the eresp count as I would expect from the logs. Additionally the logs are showing "SD" on the 412 errors which seems odd considering the backend server shouldn't be having any connection issues. So I am still trying to nail down what the issue is; it seems that either the http client lib or the backend server may be doing something funny. By the way, is there any performance hit to leaving that socket open? Is there a list of commands somewhere in the docs? Thanks. -Joe On Thu, 20 Aug 2009 22:05:18 +0200 Willy Tarreau wrote: > On Thu, Aug 20, 2009 at 10:14:39AM -0700, Joe Williams wrote: > > > > I seem to be seeing 412 error codes from my backend in my haproxy > > logs and from what I can tell haproxy is producing 502 errors when > > this happens. In my case the 412 is expected and I would like to > > pass it along to the client. Is there an option to do this? > > haproxy has no reason to block a 412 response. However, it is very > likely that a header in the response is invalid, making the response > itself invalid. > > You can check this by enabling "option accept-invalid-http-response" > and seeing if it doesn't do that anymore. If so, I suggest that you > fix your application for the invalid header, otherwise there will > always be one part of the net who will have trouble accessing your > site. > > In order to figure out what header is wrong, I suggest that you enable > the global stats socket and connect to it using "socat", then issue > the "show errors" command. It will report a capture of last invalid > request and response for each frontend/backend, with the exact > location of the first anomaly found. > > It is very helpful to web developers who encounter trouble pushing > their apps in production. > > Regards, > Willy > > -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
backend 412 http errors
I seem to be seeing 412 error codes from my backend in my haproxy logs and from what I can tell haproxy is producing 502 errors when this happens. In my case the 412 is expected and I would like to pass it along to the client. Is there an option to do this? Thanks. -Joe -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: haproxy include config
Holger's config utility inspired me to write my own that fit my needs a bit better. It's called haproxy_join and basically does the job of combining configs for you, creates a backup of the old config, and so on. The code is available at: http://github.com/joewilliams/haproxy_join/tree/master And the obligatory blog post is at: http://www.joeandmotorboat.com/2009/07/01/introducing-haproxy_join-and-how-to-use-it-with-chef/ -Joe On Fri, 26 Jun 2009 03:36:46 +0200 Willy Tarreau wrote: > On Tue, Jun 16, 2009 at 06:43:56AM +0200, Timh Bergström wrote: > > 2009/6/15 Holger Just : > > > > > > So, after checking with my chief about opensourcing our stuff I > > > can finally conclude: Yes we can! :) > > > > > > You can find the script at > > > http://github.com/finnlabs/haproxy/tree/master > > > > > > --Holger > > > > > > > > > > Nice, that looks interesting! I'll tell you if I ever put it into > > production. A big thanks to you and your employer and grats on the > > decision to OSS it. :-) > > Hi guys, > > a few days ago, in order to address a different need, I have > implemented the ability to load multiple config files. The initial > goal was to use a distinct file for the global section, but it > revealed useful to load a config by chunks too. > > Right now there are important limitations : > - 10 different files max (can easily be changed) > - each file starts with a new section > > There is no "include" directive, all the files are specified with "-f > file" on the command line. > > The code is not yet in my tree, it's lying at Exosec right now. But > I'll merge it soon as it's useful (at least for globals and/or > defaults). > > Regards, > Willy > -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
Re: load balancing based off type of request
Thanks for the help! -Joe Willy Tarreau wrote: On Mon, Jan 26, 2009 at 09:28:48PM -0800, Joe Williams wrote: I am attempting to load balance based off of the type of request (POST, PUT, DELETE, GET, etc). Sending GETs to all backend servers and POST, DELETE and PUT to only one. From the documentation it looks like this might be possible with an ACL or maybe some regex. I tried a couple configurations but didn't get anywhere. Is this possible? If so, can someone point me in the right direction or give an example? yes it is possible. You could find examples in configuration.txt and in the examples directory. Basically, it would work like this :

frontend XXX
    bind :80
    ...
    acl reserved_method method POST DELETE PUT
    use_backend reserved_backend if reserved_method
    default_backend normal_backend

backend normal_backend
    ...
    server srv1
    server srv2

backend reserved_backend
    ...
    server srv1
    server srv2

And so on. You get the idea. Regards, Willy -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/
load balancing based off type of request
I am attempting to load balance based off of the type of request (POST, PUT, DELETE, GET, etc). Sending GETs to all backend servers and POST, DELETE and PUT to only one. From the documentation it looks like this might be possible with an ACL or maybe some regex. I tried a couple configurations but didn't get anywhere. Is this possible? If so, can someone point me in the right direction or give an example? Any help would be appreciated. -Joe -- Name: Joseph A. Williams Email: j...@joetify.com Blog: http://www.joeandmotorboat.com/