Re: odd bug in 1.5-dev4

2011-03-23 Thread Cory Forsyth
Yes, that was the fix, thank you.

On Tue, Mar 22, 2011 at 6:55 PM, Cyril Bonté cyril.bo...@free.fr wrote:

 Hi Cory,

 On Tuesday, March 22, 2011, at 23:45:41, Cory Forsyth wrote:
  I'm running 1.5-dev4 on ubuntu (linux26 target) with the following config
 (...)
  When I run it with haproxy -f config_file -d and then do a curl
  localhost in another terminal, I get a screenful of this:
 
  0010:http_proxy.clireq[0023:]: GET / HTTP/1.1
  0010:http_proxy.clihdr[0023:]: User-Agent: curl/7.19.7
  (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3
  libidn/1.15 0010:http_proxy.clihdr[0023:]: Host: localhost
  0010:http_proxy.clihdr[0023:]: Accept: */*
  (...)
 
  I have the full output from haproxy; let me know if you need more
  information for testing.

 This looks like the bug reported yesterday about a loop in haproxy 1.5-dev4
 and fixed by David de Colombier.

 Can you apply this patch and try again?

 http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=64e9c90e69cd8b0fe8dd60024ccbe528705fbd8f

 --
 Cyril Bonté



Rewrite request URI based on Host header

2011-03-23 Thread Dorin Cornea
Hey guys,
I would like to set up HAProxy to forward HTTP requests to several backend 
servers, but I need it to also rewrite the URI based on the Host header. 
I've read through the docs, but it seems that reqirep isn't suitable for this 
purpose. Any idea whether this is even possible with HAProxy? If it isn't 
possible with the current code, can you offer any suggestions on how a patch 
could be developed to achieve this?
What I'd like is for HAProxy to rewrite a request like this:

GET /original-uri HTTP/1.1
Host: original-domain.tld

to:

GET /domains/original-domain.tld/original-uri HTTP/1.1
Host: serverN

and forward that request to one of the internal servers. 
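
A rough sketch of the kind of rule I'm imagining, in the style of an 
"http-request set-path" directive. I'm not sure this (or something 
equivalent) exists in the current release, and the backend name, server 
address and fixed Host value below are only placeholders:

frontend http-in
    bind :80
    mode http
    # prepend /domains/<original Host> to the original path
    # (assumes the Host header carries no port)
    http-request set-path /domains/%[req.hdr(host)]%[path]
    # replace Host with the internal server's name (placeholder value)
    http-request set-header Host serverN
    default_backend internal_servers

backend internal_servers
    mode http
    server srv1 10.0.0.1:80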

Thank you,
Dorin



minconn, maxconn and fullconn

2011-03-23 Thread James Bardin
Hello,

I've been going through haproxy in depth recently, but I can't quite
figure out the details of fullconn, minconn, and maxconn.

First of all, fullconn confuses me, and this example from the docs doesn't help:

  Example :
     # The servers will accept between 100 and 1000 concurrent connections each
     # and the maximum of 1000 will be reached when the backend reaches 10000
     # connections.
     backend dynamic
        fullconn   10000
        server srv1   dyn1:80 minconn 100 maxconn 1000
        server srv2   dyn2:80 minconn 100 maxconn 1000

What's the point of the fullconn 10000 here? Won't the servers
already be maxed out at 2000 connections, and already at their
respective maximums, long before 10000 connections are made?
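
My own reading of how minconn/maxconn/fullconn interact, sketched as
back-of-the-envelope arithmetic against the example above (this is an
assumption on my part, not something quoted from the docs):

  # With minconn set, each server's effective limit seems to be dynamic,
  # roughly max(minconn, maxconn * backend_connections / fullconn),
  # capped at maxconn. For minconn 100, maxconn 1000, fullconn 10000:
  #
  #   backend connections    effective per-server limit
  #   -------------------    --------------------------
  #        0 .. 1000           100   (minconn floor)
  #             5000           500   (1000 * 5000 / 10000)
  #            10000          1000   (maxconn, reached at fullconn)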

Is using minconn+maxconn+fullconn simply to give finer-grained control
over resource allocation than you could get with the load-balancing
algo + weights? Is there a common use case for minconn, or is it one
of those options the majority of users never need?


Maxconn can be declared in defaults, frontend, listen, under server,
and in global as well. Does the first limit hit take priority? E.g. if I
set maxconn 10 in global, are my *total* connections for everything
limited to 10? Should I set:
  (global maxconn) = sum(frontend maxconns) = sum(server maxconns)
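
For illustration, the three scopes I'm asking about, with arbitrary
placeholder values (my own sketch, not a recommendation):

  global
      maxconn 2000        # process-wide cap on concurrent connections

  frontend www
      bind :80
      maxconn 1000        # cap on connections accepted by this frontend
      default_backend app

  backend app
      server srv1 10.0.0.1:8080 maxconn 500   # cap on concurrent connections sent to this server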



Thanks!
-jim



maxconn vs. option httpchk

2011-03-23 Thread Cassidy, Bryan
Hi all,

I've noticed an odd (lack of) interaction between maxconn and option 
httpchk... 

If a server's maxconn limit has been reached, it appears that HTTP health 
checks are still dispatched. If I've configured the maxconn limit to match the 
number of requests the backend server can handle concurrently, and all of these 
connections are busy with slow requests, HAProxy will assume the server is 
down; once the server completes a request, HAProxy still waits until 'rise' 
consecutive health checks have succeeded before using it again (as expected if 
the server had really been down, but here it was only busy). This makes overly 
busy periods even worse.

I'm not sure if this explanation is clear; perhaps a concrete configuration 
might help.

listen load_balancer
bind :80
mode http

balance leastconn
option httpchk HEAD /healthchk
http-check disable-on-404

default-server port 8080 inter 2s rise 2 fall 1 maxconn 3
server srv1 srv1.example.com:8080 check
server srv2 srv2.example.com:8080 check

With the above toy example, if each of srv1 and srv2 can only respond to 3 
requests concurrently, and 6 slow requests come in (each taking more than 2 
seconds), both backend servers will be considered down for up to 4 seconds in 
the worst case (inter 2s * rise 2) after one of the requests finishes.

I know I can work around this by setting maxconn to one less than a server's 
maximum capacity (perhaps this would be a good idea for other reasons anyway). 
I also suspect I could work around it by using TCP status checks instead of 
HTTP checks, though I haven't tried that, since I like the flexibility HTTP 
health checks offer (like disable-on-404).
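
For concreteness, the first workaround applied to the toy config above:
maxconn set one below each server's real capacity of 3, so one connection
slot stays free for the health check (the rest of the config unchanged):

    default-server port 8080 inter 2s rise 2 fall 1 maxconn 2
    server srv1 srv1.example.com:8080 check
    server srv2 srv2.example.com:8080 check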

Is this behavior a bug or a feature? Intuitively I would have expected the HTTP 
health checks to respect maxconn limits, but perhaps there was a conscious 
decision to not do so (for instance, maybe it was considered unacceptable for a 
server's health to be unknown when it is fully loaded).

Thanks,
Bryan