Problems putting a persistence cookie in the defaults section
Hi

I wanted to put a persistence cookie in the defaults section of my config, but I ran into 2 problems on reloading HAproxy as a result:

- Every tcp backend in my config throws a warning like this:

  [WARNING] 185/153532 (25427) : config : 'cookie' statement ignored for proxy 'sometcpbackend' as it requires HTTP mode.

- My stats config doesn't have a server list, so the reload fails with:

  [ALERT] 185/153532 (25427) : config : HTTP proxy stats has a cookie but no server list !

My stats listener looks like this:

listen stats 0.0.0.0:9111
    mode http
    stats uri /

For the first case, I understand why the warning is emitted, but perhaps it'd make sense to only output the warning if the cookie definition is specifically attempted on the tcp proxy itself, rather than inherited from the defaults section. Come to think of it, perhaps TCP backends should just not inherit the cookie definition at all? I'm not great at tracing my way through C code unfortunately, so I have no idea how practical these are.

For the second case, I'm not sure if my stats listener is wrong in some way. I inherited the config more than 2 years ago, and while I've become fairly familiar with most of it, I've never really needed to tamper with the stats part, so it may be that it isn't optimally defined, but looking at the docs has not helped me see what's wrong, if anything.

Help! :-(

Regards, Graeme.
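In case someone finds this in the archives: the obvious workaround is to keep the cookie directive out of defaults and repeat it in each HTTP backend, so tcp proxies and the stats listener never inherit it. A rough sketch (backend names, cookie name and addresses are made up):

```
defaults
    mode http
    # no 'cookie' directive here, so tcp proxies and the
    # stats listener don't inherit it

backend some_http_backend
    cookie SERVERID insert indirect
    server web1 10.0.0.1:80 cookie web1 check
    server web2 10.0.0.2:80 cookie web2 check

listen stats 0.0.0.0:9111
    mode http
    stats uri /
```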
Re: Re: Check backend servers
On 10 February 2012 14:50, Sebastian Fohler i...@far-galaxy.de wrote:

What URL does haproxy use exactly to check the service? Is it the realm + the url part or something else? Just to be sure to test the correct option.

Hi Sebastian

If you are just using the check option for the backends, then the health check is considered successful if a successful TCP connection can be established on the IP/port specified for the backend. This is often not a good indicator of health for HTTP applications, and for those you can use option httpchk to do actual HTTP requests, which the docs (http://haproxy.1wt.eu/download/1.4/doc/configuration.txt) describe as follows. This option may take any of these forms:

option httpchk - Does an HTTP/1.0 GET for the URI /
option httpchk <uri> - As above, but you can specify the URI to use instead of /
option httpchk <method> <uri> - As above, but you can specify the HTTP method as well (GET, POST, etc.)
option httpchk <method> <uri> <version> - As above, but you can also specify the version, e.g. if you want to use HTTP/1.1 instead. To use this you probably need to send the HTTP Host: header as well; you can see in my example below how that's done.

In all cases, the health check is considered successful if the HTTP status code returned from the backend is 2xx or 3xx.

The last form is typically the most useful. Let's say your app is www.example.com and you decide that retrieving the URI /test is the way to determine if it's healthy or not; you would then use the following in the backend definition:

option httpchk GET /test HTTP/1.1\r\nHost:\ www.example.com

If you still have questions, please post the backend definition from your config file.

Graeme.
Re: Check backend servers
On 10 February 2012 16:57, Baptiste bed...@gmail.com wrote:

Configure it like that:

option httpchk HEAD /index.php HTTP/1.0\r\nHost:\ www.domain.com

== please note the backslashes ( \ ) before the spaces.

You should use HTTP/1.1 if you're sending a Host: header.

Graeme.
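Putting both suggestions together, the check line would presumably become (using Baptiste's example domain):

```
option httpchk HEAD /index.php HTTP/1.1\r\nHost:\ www.domain.com
```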
Re: Does haproxy support cronolog?
On 31 January 2012 11:21, wsq003 wsq...@sina.com wrote: Hi Here we want haproxy to write logs to separate log files (i.e. /home/admin/haproxy/var/logs/haproxy_20120131.log), and we want to rotate the log files. Then cronolog seems to be a good candidate. HAproxy can only log to a syslog daemon currently, and this is unlikely to change. Graeme.
Re: http to https redirection
On 19 December 2011 16:37, MEßNER Arthur, Ing.Mag. arthur.mess...@tilak.at wrote:

hello, is there any method to do http to https redirection with a variable Location? my configuration:

frontend someserver-clear
    bind 10.16.246.9:80
    acl clear dst_port 80
    redirect location https://someserver.somedomain/ if clear

it works, but redirects something like http://someserver.somedomain/login/index.html to https://someserver.somedomain/ - no path

Use redirect prefix instead of redirect location. Prefix will retain the rest of the URI.

HTH, Graeme.
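Applied to the config above, that would look something like this (untested sketch based on the poster's frontend):

```
frontend someserver-clear
    bind 10.16.246.9:80
    acl clear dst_port 80
    # 'prefix' keeps the original path and query string, so
    # /login/index.html redirects to the same path on https
    redirect prefix https://someserver.somedomain if clear
```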
Re: haproxy and interaction with VRRP
On 12 December 2011 11:18, Vincent Bernat ber...@luffy.cx wrote:

Hi! When haproxy is bound to an IP address managed by VRRP, this IP address may be absent when haproxy starts. What is the best way to handle this? 1. Start haproxy only when the host is master. 2. Use transparent mode. 3. Patch haproxy to use IP_FREEBIND option.

On Linux it's possible to enable binding to an address which isn't associated with a device on your system. This is what we do on our HAproxy boxes and we've never had a problem with it in 2 years. This works for Debian/Ubuntu; adjust as needed for whichever distro you're using:

echo net.ipv4.ip_nonlocal_bind=1 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Once that's done, HAproxy (and any other app, actually) can bind to your VRRP addresses even when the server doesn't currently have the addresses associated with any network interfaces.

HTH, Graeme.
Re: cannot bind socket Multiple backends tcp mode
On 3 November 2011 21:34, Saul s...@extremecloudsolutions.com wrote:

My understanding was that multiple backends could use the same interface; perhaps I was wrong. If that is the case, any suggestions on how to be able to have multiple backends running tcp mode on port 443 so I can match the url and redirect to the appropriate backend from my HAproxy?

You can have multiple backends with a single frontend, and define various criteria to decide which backend to use for each incoming request. Having said that, there are problems in your configuration.

Firstly, you are defining 2 frontends listening on the same port (bind :443 twice), which is causing the cannot bind socket message.

Second, you are attempting to use hdr_beg to match an HTTP Host: header, which you cannot do when HAproxy is handling SSL traffic in TCP mode, because HAproxy cannot read the HTTP request; it's encrypted. In order to use hdr_beg and similar criteria, you must be using plain HTTP, which requires you to use stunnel, nginx or something else in front of HAproxy to handle the SSL and make a plain HTTP connection to HAproxy.

Hope this helps, Graeme.
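As a rough sketch of that layout (all names, ports and addresses here are invented): stunnel or nginx terminates SSL on :443 and forwards plain HTTP to a single HAproxy frontend, which can then inspect the Host: header and pick a backend:

```
# stunnel/nginx listens on :443 and forwards decrypted HTTP here
frontend fe_plain_http
    bind 127.0.0.1:8080
    mode http
    acl is_site1 hdr_beg(host) -i site1.
    acl is_site2 hdr_beg(host) -i site2.
    use_backend be_site1 if is_site1
    use_backend be_site2 if is_site2
    default_backend be_site1

backend be_site1
    mode http
    server app1 10.0.0.10:80 check

backend be_site2
    mode http
    server app2 10.0.0.20:80 check
```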
Re: Remote IP’s with HAProxy
On 24 October 2011 11:42, Iceskysl icesk...@gmail.com wrote:

I'm testing a new web server setup which is having a couple of issues. Essentially, we have a web server where the code uses the remote IP for some interesting things, and also some apache directories secured down to certain IPs (our office etc). However, we've just chucked this behind ha_proxy so we can look at adding some more app servers, but now the remote IP is always coming through as the proxy IP (127.0.0.1), not the real remote user. This means we can't get to some locations, and our app is behaving a little oddly where user IP is important.

There are 3 popular ways of tackling this that I can think of:

1. Use Apache's mod_rpaf (http://stderr.net/apache/rpaf/), which lets you take the client IP in the X-Forwarded-For header and treat it as the client IP. To do this you need to have option forwardfor in your HAproxy configuration.

2. Add the X-Forwarded-For header as before using option forwardfor, and change your application to look at that header instead of the client IP.

3. Use HAproxy in transparent mode, which has its own config requirements, but honestly I'm not 100% clear on what they are as I've never gone this route.

Graeme.
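For options 1 and 2, the HAproxy side of the change is just this (a sketch; the directive can go in defaults, frontend, listen or backend as suits your layout):

```
defaults
    mode http
    # appends 'X-Forwarded-For: <client ip>' to each request sent
    # to the backends; mod_rpaf or your application reads it there
    option forwardfor
```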
Re: TCP health checking for redis
On 9 September 2011 13:49, John Helliwell john.helliw...@gmail.com wrote:

I'm trying to have haproxy send requests to 4 backends which are redis servers. Only one of the four is master, and the other 3 are slaves. I want to health check by sending an INFO command, to which redis will reply $640 redis_version:1.3.15 If the response from the check includes the reply role:master, I want the health check to succeed, else fail.

listen redis localhost:6379
    mode tcp
    ...
    option httpchk INFO \r\n
    http-check expect rstring master

I know I'm trying to do something odd with httpchk, and perhaps I need to craft a better regexp for my expect string. Could anyone assist?

Hi John

The something odd you are doing is attempting to use the httpchk option to speak a non-HTTP application protocol. Using the check option in server definitions will make HAproxy check if it can successfully establish a TCP connection to the IP/port for the server. Successful connection == successful health check. There isn't a way to send a specific command to the application (INFO in your case), nor to parse for specific responses.

Using option httpchk allows us to do smarter health checks when a backend is speaking HTTP. It sends a specifically crafted HTTP request, and HAproxy will expect responses with specific HTTP response codes. This can be further fine-tuned using http-check expect.

Unfortunately, redis does not speak HTTP, and thus you cannot use option httpchk or http-check expect ... with it. Your health check options are limited to testing TCP connectivity to the IP/port of each server.

Regards, Graeme.
Re: TCP health checking for redis
On 9 September 2011 14:44, John Helliwell john.helliw...@gmail.com wrote:

Indeed, the httpchk is expecting an HTTP response header. I think I can fool it by installing a wrapper script on the target which inserts a valid HTTP response header - there is an example of that at http://sysbible.org/2008/12/04/having-haproxy-check-mysql-status-through-a-xinetd-script/

That could work. In case someone stumbles across this in the archives, it's worth mentioning that a mysql-specific health check does exist in the current versions of HAproxy, and the hack described in the above URL is no longer needed *for mysql*. Nevertheless, the principle described could be used for your redis checks.

Regards, Graeme.
Re: Can't bind to Virtual IP
On 11 August 2011 16:16, Ran S r...@sheinberg.net wrote:

Hello, I am trying to set up a binding to a Virtual IP in order to use master and slave HAProxy load balancers. I am following each of the two following guides:

http://www.highscalable.org/haproxy-and-keepalived-for-highly-performance-load-balancing-web-technique
http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-heartbeat-on-debian-lenny-p2

But the majority of the guides are not relevant to my problem. As far as I understand, in order for a frontend to use a different IP than the machine's IP (in an internal network), all that is needed is:

1. In HAProxy.cfg, under a listen or frontend node, set: bind some-nonexistent-ip:someport
2. In /etc/sysctl.conf add net.ipv4.ip_nonlocal_bind=1 and run sysctl -p

At this point the some-nonexistent-ip should be reachable from the network? In my case, it simply doesn't happen. Should I see it under ifconfig on the machine? I realize this may not be the best place to ask the question, but it's very relevant to HAProxy for redundancy purposes.

You're almost there. What you have done so far is configure your system so that HAproxy can *bind* an IP that doesn't exist on the machine. In order to actually use that IP, it still must be assigned to the machine. This is typically done using keepalived, which is described in the first link you mention. I suggest you configure keepalived as described in that article.

Regards, Graeme.
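If it helps, a minimal keepalived instance looks roughly like this (the interface, router id, priority and VIP are all placeholders; see the linked article for a complete master/backup setup):

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # NIC that will carry the VIP
    virtual_router_id 51    # must match on master and backup
    priority 101            # the backup node uses a lower value
    virtual_ipaddress {
        10.0.0.100          # the "some-nonexistent-ip" from above
    }
}
```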
Re: Question concerning option forwardfor and HTTP keep-alive
On 3 August 2011 17:56, Willy Tarreau w...@1wt.eu wrote:

On Wed, Aug 03, 2011 at 11:41:03AM -0400, Guillaume Bourque wrote:

Hi all. So to answer the specific question: from what I have seen, as soon as you use option http-server-close, in the apache or any backend log you will only see the client IP on the first log of a specific http request, and you will have a - for the client IP on the other page requests in the same session. Is this what we should see?

No, this is the opposite. If you have no option, you will have what you describe. If you have either option httpclose or, better, option http-server-close, then you'll have the IP for every request.

Thank you both, I understand the difference now.

Graeme.
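For anyone finding this thread in the archives, the combination being discussed looks like this in config terms (a sketch):

```
defaults
    mode http
    option forwardfor
    # keeps client-side keep-alive but closes the server-side
    # connection after each request, so every request forwarded
    # to the backend carries the X-Forwarded-For header
    option http-server-close
```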
Question concerning option forwardfor and HTTP keep-alive
Hi

I've been looking at decreasing page load times, and as part of this I'm revisiting a decision that was made when we started using HAproxy back in the 1.3.x era. At the time, HAproxy had no support for HTTP keep-alive, and we needed to use option forwardfor. As a result, we added option httpclose, as suggested by the following paragraph:

It is important to note that as long as HAProxy does not support keep-alive connections, only the first request of a connection will receive the header. For this reason, it is important to ensure that option httpclose is set when using this option.

We are currently using 1.4.8, which has (as far as I understand) support for client-side HTTP keep-alive, yet the paragraph about using option httpclose with option forwardfor is untouched in the 1.4 documentation.

So to get to my actual question: with 1.4.8, can I remove option httpclose, keep option forwardfor and still see the X-Forwarded-For header added to each request made? Or does HAproxy still only analyze the first request when client-side HTTP keep-alive is being used?

Graeme
HAproxy returns HTTP 502 error when backend returns an HTTP 302 with a long Location:
Hi

Using HAproxy 1.4.8. One of our applications generates an HTTP 302 redirect with a really long Location. In one instance I've looked at, the Location: header is 8,175 bytes. If we bypass HAproxy, the browser happily goes to the returned URL; if we instead go via HAproxy, the 302 is turned into a 502 Bad Gateway - The server returned an invalid or incomplete response.

I couldn't find any reference stating that there is a limit to the length of the Location: header, but that's all that seems odd about this. In other cases, the same application generates a 302 with a shorter Location: header which HAproxy passes on to the client, no problem. Can anyone shed some light on what the problem is here?

Thanks, Graeme.
Re: HAproxy returns HTTP 502 error when backend returns an HTTP 302 with a long Location:
On 19 July 2011 21:55, Willy Tarreau w...@1wt.eu wrote:

On Tue, Jul 19, 2011 at 11:06:58AM -0700, carlo wrote:

Check out tune.bufsize and tune.maxrewrite in the Performance Tuning section of the HAProxy docs.

Indeed. I would add something: an application which generates headers or URLs that are *that* long will never reliably work over the internet and will experience trouble through a number of components. For instance, Apache limits each line to 8192 bytes, very close to your header's length, and Apache is present everywhere. Also, the client will have to repost this request, making it even longer.

BTW, if the page you're pointing to contains images, all of them will be fetched with a Referer containing that long URL. This is a very bad idea again. Imagine if the page contains 50 objects (css, js, images, ...), then the browser has to upload 51 times 8 kB, or half a megabyte. This can take a huge time in many environments (ADSL, 3G, ...). All this really reflects a bad initial design which should be fixed one way or another.

Last, I invite you to read suggestions from the HTTP spec here: http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-15#section-3.2

Various ad-hoc limitations on header length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support messages whose combined header fields have 4000 or more octets.

You're more than twice the recommended size; you're asking for trouble. In haproxy, to work around the default limit, you can increase tune.bufsize and decrease tune.maxrewrite. I usually set them to 8kB and 1kB respectively because that's fine everywhere. You can set maxrewrite to 1kB and bufsize to 16kB to see if that fixes your issue, but I really invite you to fix the application before it's too late!

Regards, Willy

Thanks Carlo and Willy. I am in agreement, this does seem like a very bad idea; I'll have to see if they can make a design change to prevent this from becoming a future headache.

Graeme.
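For reference, Willy's suggested test values would go in the global section, something like this (a sketch of the workaround, not a recommendation to keep long headers):

```
global
    # allow up to 16kB per buffered message, while reserving
    # only 1kB of that buffer for header rewriting
    tune.bufsize    16384
    tune.maxrewrite 1024
```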
Re: start haproxy not as root?
On 9 June 2011 00:05, Jacob Fenwick jacob.fenw...@gmail.com wrote:

It seems like I must be root to start haproxy. I know that I can add a user line in global so that the process will change to say it is running as a non-root user once it is running, but it seems like I still need to be root to actually start it, or restart it. Is there any way around this?

I don't think there is, and if there was, you would be unable to listen on any ports below 1024, as only root can do that.

Graeme.
Email to the list not delivered -- anyone else?
Hi all Has anyone else noticed instances of messages sent to the list not being delivered? I just realised that a reply I sent to Kyle's question 2 days ago never made it to the list. Notice that until now there are no messages from me to the list shown on http://www.formilux.org/archives/haproxy/1102/date.html, but I definitely sent a message 2 days ago: http://i.imgur.com/tCqRe.png *confused* Graeme.
Re: ACLs with Overlapping Subnets and IPs
On 8 February 2011 14:48, Kyle Brandt k...@stackoverflow.com wrote:

Can I have an ACL that doesn't perform an action on a specific IP but will perform the action on the subnet that the IP is part of? For example:

acl bad_subnet src 10.0.0.0/8
acl okay_ip src 10.0.1.5
use_backend blocked if bad_subnet !okay_ip

So the target result would be to use the backend blocked if the IP is in the 10.0.0.0/8 subnet, unless that IP is 10.0.1.5. If the IP is outside the 10.0.0.0/8 network, no action would be taken for this rule.

I just tried this on 1.4.8 and it works exactly as you specified.

Graeme.
Backend warnings retr/redis on stats page
Hi HAproxy 1.4.8. If I look at the stats page, on one of my backends I'm seeing these values under the warning column: retr (344) and redis (172). The backend has 8 servers and only 1 has non-zero values for this column. Can someone explain what the numbers mean, I've tried poking through the documentation but nothing stands out. Thanks, Graeme.
Re: Get real source IP
On 15 November 2010 21:09, Maxime Ducharme m...@techboom.com wrote: Hi guys We are looking for a way to get real source IP that is connecting to our web services. We currently use option forwardfor, but some people are using this to bypass our checks. Is there other way to send real IP to our web servers ? Another way to do this is to use HAproxy in transparent proxy mode. I have not used it personally, but unless I'm mistaken it functions more like a NAT/routing device instead of a proxy. Here's a short howto if you'd like to try it out: http://blog.loadbalancer.org/configure-haproxy-with-tproxy-kernel-for-full-transparent-proxy/ Regards, Graeme.
Re: x-forwarded-for logging
Hi Joe

Yes, it is possible, but there's a little more work involved than just applying the patch to stunnel.

Firstly, you need to specify in your stunnel.conf that you want stunnel to add the X-Forwarded-For header:

[https]
accept = 1.2.3.4:443
connect = 1.2.3.4:80
TIMEOUTclose = 0
xforwardedfor = yes

Next you need to have the following in your haproxy config; it can go in defaults, frontend, listen or backend as appropriate for your setup:

option forwardfor

Finally, you need to configure your web server to use the X-Forwarded-For header. We're using Apache's mod_rpaf to do this (http://stderr.net/apache/rpaf/).

Regards, Graeme.

On 7 October 2010 00:31, Joe Williams j...@joetify.com wrote:

I applied the x-forwarded-for patch to stunnel in hopes that haproxy would log the forwarded for address, but it doesn't seem to. Is this possible? Thanks.

-Joe

Name: Joseph A. Williams
Email: j...@joetify.com
Blog: http://www.joeandmotorboat.com/
Twitter: http://twitter.com/williamsjoe
Re: HAProxy and DNS
Hi

This is not currently possible. DNS queries use UDP as the transport in the vast majority of cases; TCP is rarely used. HAproxy does not do UDP load balancing. This was discussed on the list a while ago. See here for more info:

http://en.wikipedia.org/wiki/Domain_Name_System#Protocol_details
http://www.formilux.org/archives/haproxy/1006/3525.html

Regards, Graeme.

On 28 September 2010 10:51, Hamnedalen, Mikael mikael.hamneda...@logica.com wrote:

Hi. Anyone used HAProxy to load balance DNS requests? I tried

# DNS
frontend dns_front
    bind 10.216.208.20:53
    default_backend dns_serv

backend dns_serv
    server dns1 192.176.113.155:53 check
    server dns2 192.176.113.156:53 check
# DNS END

But it didn't work. Any ideas, or any hints for any other product?

Regards, Mikael Hamnedalen
Re: Matching the host header
Hi Julien

While you could do this with multiple ACLs or a regex, there is a third option which is even better: use the hdr_end() function instead of hdr(). From the doc (http://haproxy.1wt.eu/download/1.4/doc/configuration.txt, see section 7.5.3. Matching at Layer 7):

hdr_end <string>
hdr_end(<header>) <string>
  Returns true when one of the headers ends with one of the strings. See hdr
  for more information on header matching. Use the shdr_end() variant for
  response headers sent by the server.

If you changed your ACL definitions to this, it would work the way you expect it to:

acl host_hdr_siteA hdr_end(host) -i sitea.com
acl host_hdr_siteB hdr_end(host) -i siteb.org
acl host_hdr_siteC hdr_end(host) -i sitec.net

There are several hdr_* functions described in the doc, including one for regex, but the doc warns that regex matching is slower than simple string or substring matching, so hdr_end is a better choice in your case.

Regards, Graeme.

On 1 September 2010 11:51, Julien Vehent jul...@linuxwall.info wrote:

Hi there, I have a quick question for which I haven't found any answer in the doc. I use haproxy as a host balancer: it stands in front of my web servers, listening on port 80, and directs incoming requests to the right backend depending on the value of the host header (site1, site2, and so on...). I have a frontend section as follows:

frontend http-in
    bind *:80
    default_backend siteX
    acl host_hdr_siteA hdr(host) -i sitea.com
    acl host_hdr_siteB hdr(host) -i siteb.org
    acl host_hdr_siteC hdr(host) -i sitec.net
    use_backend siteA if host_hdr_siteA
    use_backend siteB if host_hdr_siteB
    use_backend siteC if host_hdr_siteC

The only thing is that the ACLs seem to match only the exact value. Thus, if somebody tries 'www.sitea.com' instead of 'sitea.com', the acl of sitea doesn't match and the visitor is directed to the default backend. My question is: what is the clean way to do this? Should I have two ACLs for the same site, or can I use regex on header matching? (Or all of this is just simply wrong, in which case how can/should I do it :) )

Thanks, Julien
Re: Can't get server check to work with virtual hosts
Ah, I misunderstood your config. To do this you will need to split the single listen section into a frontend and 2 backends. The frontend handles the listening on a specific IP and port, as well as making the decision on which backend to use; in your case it'll be based on the Host: header. Each backend corresponds to a specific named vhost and will have its own health checks.

Based on your original config, and the changed health check as per my previous email, your config would look like this. I'm going to refer to the vhosts as vhost1.example.com and vhost2.example.com, and the servers as server1.example.com and server2.example.com.

global
    maxconn 100

frontend webfarm cluster6:23000
    mode http
    option httpclose
    timeout client 5s
    acl host_vhost1 hdr(host) -i vhost1.example.com
    acl host_vhost2 hdr(host) -i vhost2.example.com
    use_backend be_vhost1 if host_vhost1
    use_backend be_vhost2 if host_vhost2

backend be_vhost1
    mode http
    balance roundrobin
    option httpclose
    timeout connect 5s
    timeout server 5s
    cookie SERVERID insert indirect
    option httpchk GET /index.html HTTP/1.1\r\nHost:\ vhost1.example.com
    server webA server1.example.com:80 cookie A check inter 2s
    server webB server2.example.com:80 cookie B check inter 2s

backend be_vhost2
    mode http
    balance roundrobin
    option httpclose
    timeout connect 5s
    timeout server 5s
    cookie SERVERID insert indirect
    option httpchk GET /index.html HTTP/1.1\r\nHost:\ vhost2.example.com
    server webA server1.example.com:80 cookie A check inter 2s
    server webB server2.example.com:80 cookie B check inter 2s

If you don't follow what this is doing, then see if the documentation (http://haproxy.1wt.eu/download/1.3/doc/configuration.txt) helps, or feel free to ask more questions.

Graeme.

On 17 August 2010 23:42, Roy Smith r...@panix.com wrote:

Ah, OK, that's getting me closer. Thanks!
Now I've got

option httpchk GET /index.html HTTP/1.1\r\nHost: test1.cluster6.corp.amiestreet.com
server webA test1.cluster6.corp.amiestreet.com:80 cookie A check inter 2s
server webB test2.cluster6.corp.amiestreet.com:80 cookie B check inter 2s

and it's sending the correct headers, at least for test1. The problem is that it's also sending Host: test1... to test2. I don't see how to configure it to send each host the correct header.

On Aug 17, 2010, at 5:23 PM, Graeme Donaldson wrote:

Hi Roy

You simply need to send an HTTP 1.1 request with a Host: header in the http check, like this:

option httpchk GET /index.html HTTP/1.1\r\nHost:\ vhost.example.com

Graeme.

On 17 August 2010 23:19, Roy Smith r...@panix.com wrote:

I'm running HA-Proxy version 1.3.22 on Ubuntu Linux. I've got apache set up with two virtual hosts, and I want to use haproxy to round-robin between them. Ultimately, these virtual hosts will be on different machines, but for my testing environment, they're on the same box. I've got a config file I'm using for testing:

global
    maxconn 100

listen webfarm cluster6:23000
    mode http
    option httpclose
    balance roundrobin
    cookie SERVERID insert indirect
    timeout server 5s
    timeout client 5s
    timeout connect 5s
    option httpchk GET /index.html HTTP/1.0
    server webA test1.cluster6.corp.amiestreet.com:80 cookie A check inter 2s
    server webB test2.cluster6.corp.amiestreet.com:80 cookie B check inter 2s

As soon as I start haproxy up, I get:

[WARNING] 228/171101 (20636) : Server webfarm/webA is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 228/171102 (20636) : Server webfarm/webB is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 228/171102 (20636) : proxy 'webfarm' has no server available!

The problem seems to be that when it sends the HTTP requests to apache, it leaves out the Host: header.
For example, strace shows that wget does:

write(3, "GET /index.html HTTP/1.0\r\nUser-Agent: Wget/1.12 (linux-gnu)\r\nAccept: */*\r\nHost: test1.cluster6.corp.amiestreet.com\r\nConnection: Keep-Alive\r\n\r\n", 142) = 142

but haproxy just does:

sendto(5, "GET /index.html HTTP/1.0\r\n\r\n", 28, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 28

How do I get haproxy to work and play well with virtual hosts?
Re: Can't get server check to work with virtual hosts
Hi Roy

You simply need to send an HTTP 1.1 request with a Host: header in the http check, like this:

option httpchk GET /index.html HTTP/1.1\r\nHost:\ vhost.example.com

Graeme.

On 17 August 2010 23:19, Roy Smith r...@panix.com wrote:

I'm running HA-Proxy version 1.3.22 on Ubuntu Linux. I've got apache set up with two virtual hosts, and I want to use haproxy to round-robin between them. Ultimately, these virtual hosts will be on different machines, but for my testing environment, they're on the same box. I've got a config file I'm using for testing:

global
    maxconn 100

listen webfarm cluster6:23000
    mode http
    option httpclose
    balance roundrobin
    cookie SERVERID insert indirect
    timeout server 5s
    timeout client 5s
    timeout connect 5s
    option httpchk GET /index.html HTTP/1.0
    server webA test1.cluster6.corp.amiestreet.com:80 cookie A check inter 2s
    server webB test2.cluster6.corp.amiestreet.com:80 cookie B check inter 2s

As soon as I start haproxy up, I get:

[WARNING] 228/171101 (20636) : Server webfarm/webA is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 228/171102 (20636) : Server webfarm/webB is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 228/171102 (20636) : proxy 'webfarm' has no server available!

The problem seems to be that when it sends the HTTP requests to apache, it leaves out the Host: header.

For example, strace shows that wget does:

write(3, "GET /index.html HTTP/1.0\r\nUser-Agent: Wget/1.12 (linux-gnu)\r\nAccept: */*\r\nHost: test1.cluster6.corp.amiestreet.com\r\nConnection: Keep-Alive\r\n\r\n", 142) = 142

but haproxy just does:

sendto(5, "GET /index.html HTTP/1.0\r\n\r\n", 28, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 28

How do I get haproxy to work and play well with virtual hosts?
Re: Load balancing ..
Hi

What you are trying to achieve is usually called link aggregation or line bonding. This has nothing to do with the load balancing functionality provided by HAProxy. Try these links for more information:

http://www.google.com/search?q=adsl+bonding
http://www.google.com/search?q=adsl+aggregation

Regards, Graeme.

On 14 July 2010 16:47, Sabz Host Co. sabzh...@gmail.com wrote:

Hello Sir. I searched for load balancing and found you. I have 4 adsl accounts (pppoe) from the same ISP at 2mbps, meaning if I download a file I can download at a speed of 210kb per second. If I load balance these 4 lines with your software, can I download one file at a speed of 840kb per second? If you have any solution, please help me; I very much need it. Thanks.

-- Sabz Host Company, www.sabzhost.com
Limit to number of items in an ACL matching src IP
Hi

I'm playing around with something like this:

acl src_goaway src 10.0.0.1
redirect location http://example.com/goaway.html if src_goaway

I have seen examples in the docs where src is specified as multiple IPs in a single ACL, but I don't see any mention of how many IPs can be in a single ACL. Did I miss this in the docs?

Thanks, Graeme.
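For the record, listing several values on one acl line does work; a sketch with invented addresses (networks in CIDR notation are also accepted):

```
acl src_goaway src 10.0.0.1 10.0.0.5 192.168.10.0/24
redirect location http://example.com/goaway.html if src_goaway
```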