Using data from capture request header Host
Hi,

First, congrats for this great software! I'm forcing HTTPS in haproxy (the HTTPS requests come from stunnel):

    #
    # Rewrite Rules
    #
    # Set acl first
    acl secure src 127.0.0.1
    redirect prefix https://www.domain.com if !secure

But I have multiple subdomains (not fixed): client1.domain.com, client2.domain.com, client-n.domain.com... and for the redirect to HTTPS I need to get and pass the Host header. I see that haproxy can capture the host:

    capture request header Host len 15

But only for logging? I need to use the Host header in the redirect:

    redirect prefix https://{HOST} if !secure

Is this possible? Many thanks in advance,

Jose Luis
getting BADREQ on logs in ssl requests
guys,

we have set up haproxy for HTTP and SSL traffic, and so far everything worked as expected. But today, looking at the request logs, I can see BADREQ in the logs each time a user goes to the SSL part of the site, even though the request goes through just fine. What does this mean? How do I fix it?

    Nov 9 08:14:59 localhost.localdomain haproxy[14783]: 190.191.225.213:50871 [09/Nov/2009:08:14:59.167] load_balanced_http load_balanced_http/webserver4 0/0/2/7/348 200 9950 - - 0/0/0/0/0 0/0 GET / HTTP/1.1
    Nov 9 08:15:03 localhost.localdomain haproxy[14783]: 190.191.225.213:50885 [09/Nov/2009:08:15:02.590] load_balanced_https load_balanced_https/webserver2 -1/1/1/-1/828 200 11497 - - 0/0/0/0/0 0/0 BADREQ

This is my current configuration:

    global
        maxconn 15000          # Total Max Connections
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        daemon
        nbproc 1               # Number of processes
        user haproxy
        group haproxy

    defaults
        log global
        option httplog
        mode tcp
        clitimeout 6
        srvtimeout 3
        contimeout 4000
        retries 3
        option redispatch

    listen load_balanced_https AAA.BBB.CCC.DDD:443
        balance source
        option ssl-hello-chk
        mode tcp
        option httpclose
        option forwardfor
        server webserver1 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check
        server webserver2 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check

    listen load_balanced_http AAA.BBB.CCC.DDD:80
        balance roundrobin
        mode http
        option forwardfor
        server webserver4 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check
        server webserver3 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check
        server webserver5 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check backup

    listen admin_stats 127.0.0.1:80
        mode http
        stats uri /proxy-stats
        stats realm Global\ statistics

-- 
Gabriel Sosa
If you seek different results, don't always do the same thing. - Einstein
Re: ACL Question
Hi Guys,

I appreciate the responses. Over the weekend I decided to test using NFS and a single caching server for the application caching module, and it worked great, so I don't have to make haproxy try to send the same request to multiple servers *S* I just have to send it to a single box now. I was just curious whether it could be done. *S*

Love Haproxy and I recommend it to everyone now.

Joe

Willy Tarreau wrote:
> Hi,
>
> On Fri, Nov 06, 2009 at 11:35:24AM +0100, XANi wrote:
> > Hi,
> >
> > On Thu, 05 Nov 2009 19:44:03 -0500, Joseph Hardeman jharde...@colocube.com wrote:
> > > Hi Everyone,
> > >
> > > I know you can use acl's to take a request for a file and send it to a
> > > different backend than the normal requests go to, but I was wondering:
> > > can an acl be set up so that when a request for a file, say update.php,
> > > is called via the external url, for example http://www.example.com/update.php,
> > > instead of sending it to a single server you send it to all of the
> > > backend servers at the same time?
> > (...)
> > AFAIK there isn't any possibility to do "send request to that backend AND
> > do something else" (I'd love to have the possibility to use external
> > rewriting software, like squid can).
>
> Indeed, it is not possible to replay a request multiple times (and this has
> nothing to do with ACLs).
>
> > What kind of cache do you use? If it's memcached you can make one big
> > "global" cache quite easily (in most client libs you just need to specify
> > all the servers in the same order), and with other types of cache you would
> > need a script that, when the cache gets updated on one backend, sends the
> > update to the other ones.
>
> It's quite common to see people send remote actions to specific target
> servers, most often just to verify that all servers are up to date. For
> this they simply use cookies. If you set a passive cookie for each of your
> cache servers, you can decide which one you use and your script can simply
> use that:
>
>     cookie SRV
>     server cache1 1.1.1.1 cookie c1 ...
>     server cache2 1.1.1.2 cookie c2 ...
>     server cache3 1.1.1.3 cookie c3 ...
Regards,
Willy
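To illustrate Willy's passive-cookie suggestion, a fuller backend might look like the sketch below. The backend name, ports, and health checks are invented for the example; with `cookie SRV` and no `insert`/`rewrite` keyword, haproxy only honours a cookie the client presents, it never sets one itself, so an update script can target a specific cache by sending e.g. `Cookie: SRV=c2`:

```
backend cache_farm
    balance roundrobin
    # passive cookie: honour "Cookie: SRV=..." when the client sends it,
    # but never insert it into responses
    cookie SRV
    server cache1 1.1.1.1:80 cookie c1 check
    server cache2 1.1.1.2:80 cookie c2 check
    server cache3 1.1.1.3:80 cookie c3 check
```

The update script then loops over c1, c2, c3, sending the same request once per cookie value, which achieves the "send to all backends" effect from the client side.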
Re: Segfault if using long ACLs
Hi Holger,

first, thank you very much for such a detailed bug report!

At first it looked impossible to me, because there are tests everywhere to ensure that the last arg points to a null string. But after a more careful read, I think I have found the issue. Here in cfgparse.c, we iterate over the parameters:

    3633    arg = 0;
    3634    args[arg] = line;
    3635
    3636    while (*line && arg < MAX_LINE_ARGS) {
            ...

And here we ensure that we set the last arg to point to the last char of the line:

    3700    /* zero out remaining args and ensure that at least one entry
    3701     * is zeroed out.
    3702     */
    3703    while (++arg <= MAX_LINE_ARGS) {
    3704        args[arg] = line;
    3705    }

But this is wrong for two reasons:

- if we stop at 3636 on arg == MAX_LINE_ARGS, then at 3703 we don't enter the second loop because ++arg is already larger than MAX_LINE_ARGS. In fact there is a reason for this: we don't want to scratch the last word when it's not a space.
- even if we got there, line would not point to the last char, so we would not have an empty string anyway.

In fact the test would be valid if we could ensure that *line == 0 in the case of a truncated line. Also it would be wise to report it because, as you say, you don't know whether your configuration is working or not. So let's reject the line and indicate where it broke. The following patch fixes it.

Thanks!
Willy

From 3b39c1446b9bd842324e87782a836948a07c25a2 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Mon, 9 Nov 2009 21:16:53 +0100
Subject: [BUG] config: fix wrong handling of too large argument count

Holger Just reported that running ACLs with too many args caused a segfault during config parsing. This is caused by a wrong test on the argument count. In case of too many arguments on a config line, the last one was not correctly zeroed. This is now done, and we report the error indicating what part had been truncated.
---
 src/cfgparse.c |   15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index f0d690b..a1d2428 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -3697,6 +3697,21 @@ int readcfgfile(const char *file)
 		if (!**args)
 			continue;
 
+		if (*line) {
+			/* we had to stop due to too many args.
+			 * Let's terminate the string, print the offending part then cut the
+			 * last arg.
+			 */
+			while (*line && *line != '#' && *line != '\n' && *line != '\r')
+				line++;
+			*line = '\0';
+
+			Alert("parsing [%s:%d]: line too long, truncating at word %d, position %d : %s.\n",
+			      file, linenum, arg + 1, args[arg] - thisline + 1, args[arg]);
+			err_code |= ERR_ALERT | ERR_FATAL;
+			args[arg] = line;
+		}
+
 		/* zero out remaining args and ensure that at least one entry
 		 * is zeroed out.
 		 */
-- 
1.6.4.4
Re: getting BADREQ on logs in ssl requests
Hi,

On Mon, Nov 09, 2009 at 12:25:29PM -0200, Gabriel Sosa wrote:
> guys, we have set up haproxy for http and ssl traffic, so far all worked as
> expected. but today, looking at the request logs, each time a user goes to
> the ssl part of the site I can see BADREQ in the logs, but the request goes
> just fine. what does this mean? how do I fix this?

Pretty amazing, this bug has been around almost since the beginning, it seems, and nobody caught it yet! It is caused by "option httplog" in your default settings, which gets inherited by the https instance, which then tries to log in HTTP format. I thought there was a check for this, and obviously I was wrong.

    defaults
        log global
        option httplog
               ^^^^^^^
        mode tcp
        ...

    listen load_balanced_https AAA.BBB.CCC.DDD:443
        balance source
        option ssl-hello-chk
        mode tcp
        ^^^^^^^^

Also be careful: the following options are wrong too in HTTPS (since haproxy can't touch the stream). They are harmless right now, but may become invalid and cause an error when the checks become stricter:

        option httpclose
        option forwardfor
        ...

I've committed the following patch, which emits a warning in case of such a wrong setting, which might otherwise be hard to catch. It also automatically falls back to tcplog for a TCP proxy.

Thanks for the report!
Willy

From 5f0bd6537f8b56b643ef485d7a3c96d996d9b01a Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Mon, 9 Nov 2009 21:27:51 +0100
Subject: [BUG] config: disable 'option httplog' on TCP proxies

Gabriel Sosa reported that logs were appearing with BADREQ when 'option httplog' was used with a TCP proxy (eg: inherited via a default instance). This patch detects it and falls back to tcplog after emitting a warning.
---
 src/proxy.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/src/proxy.c b/src/proxy.c
index 69b070e..15f9b92 100644
--- a/src/proxy.c
+++ b/src/proxy.c
@@ -327,6 +327,11 @@ int proxy_cfg_ensure_no_http(struct proxy *curproxy)
 		Warning("config : Layer 7 hash not possible for %s '%s' (needs 'mode http'). Falling back to round robin.\n",
 			proxy_type_str(curproxy), curproxy->id);
 	}
+	if (curproxy->to_log & (LW_REQ | LW_RESP)) {
+		curproxy->to_log &= ~(LW_REQ | LW_RESP);
+		Warning("config : 'option httplog' not usable with %s '%s' (needs 'mode http'). Falling back to 'option tcplog'.\n",
+			proxy_type_str(curproxy), curproxy->id);
+	}
 	return 0;
 }
-- 
1.6.4.4
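For anyone hitting the same BADREQ entries before the patch lands, a corrected configuration along the lines Willy describes might look like the sketch below (a sketch only, reusing Gabriel's placeholder addresses): keep `option httplog` out of the shared defaults and enable it only in the HTTP instance, so the TCP/SSL instance uses the plain tcp log format.

```
defaults
    log global
    mode tcp
    retries 3
    option redispatch

listen load_balanced_https AAA.BBB.CCC.DDD:443
    balance source
    mode tcp
    option ssl-hello-chk
    # no httplog / httpclose / forwardfor here: haproxy cannot see
    # inside the encrypted stream
    server webserver1 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check
    server webserver2 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check

listen load_balanced_http AAA.BBB.CCC.DDD:80
    balance roundrobin
    mode http
    option httplog
    option forwardfor
    server webserver3 AAA.BBB.CCC.DDD weight 1 maxconn 5000 check
```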
Re: Using data from capture request header Host
Many thanks Willy.

2009/11/9 Willy Tarreau w...@1wt.eu

> Hi,
>
> On Mon, Nov 09, 2009 at 10:59:33AM +0100, Jose Luis Gordo Romero wrote:
> > Hi, First congrats for this great software! I'm forcing https in haproxy
> > (https requests come from stunnel):
> >
> >     # Rewrite Rules
> >     # Set acl first
> >     acl secure src 127.0.0.1
> >     redirect prefix https://www.domain.com if !secure
> >
> > But I have multiple subdomains (not fixed) client1.domain.com,
> > client2.domain.com, client-n.domain.com... and for the redirect to https
> > I need to get and pass the host header.
>
> We have already received this request a few times, but it's not possible
> yet. At first I thought we could get this done easily by simply copying the
> Host header, but in fact it's not that easy. If the Host header specifies a
> port and we are switching the protocol, then we should strip the port, or
> possibly replace it with another one. Or maybe we could simply consider
> that we always strip the port and always use the Host header, but even
> then, some people like to use another host name (eg: X-Forwarded-Host).
> So this still requires a little more thinking.
>
> Regards,
> Willy
Re: We have been playing around with the new RDP cookie feature in 1.4-dev4 and it works really well...
Hi Malcolm,

I'm using haproxy as an RDP dispatcher, but in tcp mode with "balance source". This setup allows a laptop user who goes into sleep mode to get back to the same server when the machine wakes up 2 hours later.

I would be interested to hear whether you have laptop users in your setup, and whether a user ends up on the same backend server after a 1 hour sleep period. Will the RDP cookie be the same after a wakeup?

Thanks for sharing this!

Guillaume

Malcolm Turnbull wrote:
> We have been playing around with the new RDP cookie feature in 1.4-dev4 and
> it works really well... One of our guys, Nick, has written a blog about his
> configuration and testing of Windows Terminal Servers with Windows and
> Linux RDP clients. We would welcome any feedback from anyone using a
> similar configuration.
>
> http://blog.loadbalancer.org/
> or
> http://blog.loadbalancer.org/load-balancing-windows-terminal-server-%E2%80%93-haproxy-and-rdp-cookies/
>
> Thanks.
>
> -- 
> Regards,
> Malcolm Turnbull.
> Loadbalancer.org Ltd.
> Phone: +44 (0)870 443 8779
> http://www.loadbalancer.org/

-- 
Guillaume Bourque, B.Sc.
Consultant, free technology infrastructures!
514 576-7638
Re: We have been playing around with the new RDP cookie feature in 1.4-dev4 and it works really well...
Guillaume,

I think you would need to increase the clitimeout and srvtimeout to 2 hours if you wanted it to be seamless. Otherwise you would need to reconnect with the same username to join the same session once those timeouts had expired. It's not like HTTP cookies, which are remembered on the client (the login id is only sent when establishing a connection).

Nick, can you run this specific test tomorrow, please?

2009/11/9 Guillaume Bourque guillaume.bour...@gmail.com

> Hi Malcolm,
>
> I'm using haproxy as an RDP dispatcher, but in tcp mode with "balance
> source". This setup allows a laptop user who goes into sleep mode to get
> back to the same server when the machine wakes up 2 hours later.
>
> I would be interested to hear whether you have laptop users in your setup,
> and whether a user ends up on the same backend server after a 1 hour sleep
> period. Will the RDP cookie be the same after a wakeup?
>
> Thanks for sharing this!
>
> Guillaume
>
> [...]

-- 
Regards,
Malcolm Turnbull.
Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/
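For reference, a hypothetical 1.4-dev4 setup combining the RDP cookie balancing under discussion with the 2-hour timeouts Malcolm suggests might look like the sketch below (addresses and server names invented; keyword names taken from the 1.4 documentation of "balance rdp-cookie", so treat this as a starting point, not a tested configuration):

```
listen rdp_farm AAA.BBB.CCC.DDD:3389
    mode tcp
    balance rdp-cookie               # persist on the RDP cookie (mstshash)
    tcp-request inspect-delay 5s     # wait for the cookie in the first packets
    tcp-request content accept if RDP_COOKIE
    clitimeout 7200000               # 2h in milliseconds, so an idle
    srvtimeout 7200000               # session survives a laptop sleep
    server ts1 10.0.0.1:3389 weight 1 check
    server ts2 10.0.0.2:3389 weight 1 check
```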