Re: Some thoughts about redispatch

2014-05-26 Thread Dmitry Sivachenko
On 28 Nov 2012, at 18:10, Dmitry Sivachenko  wrote:

> Hello!
> 
> If haproxy can't send a request to the backend server, it will retry the same
> backend 'retries' times waiting 1 second between retries, and if 'option
> redispatch' is used, the last retry will go to another backend.
> 
> There is (I think very common) usage scenario when
> 1) all requests are independent of each other and all backends are equal, so
> there is no need to try to route requests to the same backend (if it failed, we
> will try dead one again and again while another backend could serve the request
> right now)
> 
> 2) there is response time policy for requests and 1 second wait time is just
> too long (all requests are handled faster than 500ms and client software will
> not wait any longer).
> 
> I propose to introduce new parameters in config file:
> 1) "redispatch always": when set, haproxy will always retry different backend
> after connection to the first one fails.
> 2) Allow to override 1 second wait time between redispatches in config file
> (including the value of 0 == immediate).
> 
> Right now I use the attached patch to overcome these restrictions.  It is ugly
> hack right now, but if you could include it into distribution in better form
> with tuning via config file I think everyone would benefit from it.
> 
> Thanks.
> 



On 26 May 2014, at 18:21, Willy Tarreau  wrote:
> I think it definitely makes some sense. Probably not in its exact form but
> as something to work on. In fact, I think we should only apply the 1s retry
> delay when remaining on the same server, and avoid as much as possible to
> remain on the same server. For hashes or when there's a single server, we
> have no choice, but when doing round robin for example, we can pick another
> one. This is especially true for static servers or ad servers for example
> where fastest response time is preferred over sticking to the same server.
> 


Yes, that was exactly my point.  In many situations it is better to ask another 
server immediately to get the fastest response rather than trying to stick to 
the same server as much as possible.
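For what it's worth, the latency cost being discussed can be sketched with a toy simulation. This is only an illustrative model of the behaviour described in the thread (not haproxy's actual code); the function and the two-server farm are made up for the example:

```python
# Toy model of the retry behaviour discussed above: with the stock
# behaviour, every retry goes back to the same dead server with a 1 s
# pause, and only the *last* retry is redispatched; with the proposed
# "redispatch always" plus a 0 ms pause, the second attempt can already
# hit a healthy server.

RETRY_DELAY_MS = 1000  # haproxy's hard-coded pause between retries

def time_to_success(servers, retries, redispatch_always, delay_ms):
    """Return total elapsed ms until a request is served, or None."""
    elapsed = 0
    target = 0  # first attempt goes to server 0
    for attempt in range(retries + 1):
        if servers[target] == "up":
            return elapsed
        elapsed += delay_ms
        last_retry = attempt == retries - 1
        if redispatch_always or last_retry:
            target = (target + 1) % len(servers)
    return None

farm = ["down", "up"]
stock = time_to_success(farm, 3, redispatch_always=False, delay_ms=RETRY_DELAY_MS)
proposed = time_to_success(farm, 3, redispatch_always=True, delay_ms=0)
print(stock, proposed)  # -> 3000 0
```

With the stock behaviour the request only succeeds after three seconds of retrying the dead server; with immediate redispatch it succeeds at once.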


> 
> Thanks,
> Willy



Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Looks like the attachment got stripped; attaching it now for real so it is easy 
to understand what I am talking about.

--- session.c.orig  2012-11-22 04:11:33.0 +0400
+++ session.c   2012-11-22 16:15:04.0 +0400
@@ -877,7 +877,7 @@ static int sess_update_st_cer(struct ses
 * bit to ignore any persistence cookie. We won't count a retry nor a
 * redispatch yet, because this will depend on what server is selected.
 */
-   if (objt_server(s->target) && si->conn_retries == 0 &&
+   if (objt_server(s->target) &&
s->be->options & PR_O_REDISP && !(s->flags & SN_FORCE_PRST)) {
sess_change_server(s, NULL);
if (may_dequeue_tasks(objt_server(s->target), s->be))
@@ -903,7 +903,7 @@ static int sess_update_st_cer(struct ses
si->err_type = SI_ET_CONN_ERR;
 
si->state = SI_ST_TAR;
-   si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
        return 0;
}
return 0;

On 12 May 2014, at 0:31, Dmitry Sivachenko  wrote:

> Hello,
> 
> thanks for your efforts on stabilizing -dev version, it looks rather solid 
> now.
> 
> Let me try to revive an old topic in hope to get rid of my old local patch I 
> must use for production builds.
> 
> Thanks :)
> 
> 
> 
> On 28 Nov 2012, at 18:10, Dmitry Sivachenko  wrote:
> 
>> Hello!
>> 
>> If haproxy can't send a request to the backend server, it will retry the same
>> backend 'retries' times waiting 1 second between retries, and if 'option
>> redispatch' is used, the last retry will go to another backend.
>> 
>> There is (I think very common) usage scenario when
>> 1) all requests are independent of each other and all backends are equal, so
>> there is no need to try to route requests to the same backend (if it failed, we
>> will try dead one again and again while another backend could serve the request
>> right now)
>> 
>> 2) there is response time policy for requests and 1 second wait time is just
>> too long (all requests are handled faster than 500ms and client software will
>> not wait any longer).
>> 
>> I propose to introduce new parameters in config file:
>> 1) "redispatch always": when set, haproxy will always retry different backend
>> after connection to the first one fails.
>> 2) Allow to override 1 second wait time between redispatches in config file
>> (including the value of 0 == immediate).
>> 
>> Right now I use the attached patch to overcome these restrictions.  It is ugly
>> hack right now, but if you could include it into distribution in better form
>> with tuning via config file I think everyone would benefit from it.
>> 
>> Thanks.
>> 
> 



Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Hello,

thanks for your efforts on stabilizing the -dev version; it looks rather solid 
now.

Let me try to revive an old topic in the hope of getting rid of my old local 
patch that I must use for production builds.

Thanks :)



On 28 Nov 2012, at 18:10, Dmitry Sivachenko  wrote:

> Hello!
> 
> If haproxy can't send a request to the backend server, it will retry the same
> backend 'retries' times waiting 1 second between retries, and if 'option
> redispatch' is used, the last retry will go to another backend.
> 
> There is (I think very common) usage scenario when
> 1) all requests are independent of each other and all backends are equal, so
> there is no need to try to route requests to the same backend (if it failed, we
> will try dead one again and again while another backend could serve the request
> right now)
> 
> 2) there is response time policy for requests and 1 second wait time is just
> too long (all requests are handled faster than 500ms and client software will
> not wait any longer).
> 
> I propose to introduce new parameters in config file:
> 1) "redispatch always": when set, haproxy will always retry different backend
> after connection to the first one fails.
> 2) Allow to override 1 second wait time between redispatches in config file
> (including the value of 0 == immediate).
> 
> Right now I use the attached patch to overcome these restrictions.  It is ugly
> hack right now, but if you could include it into distribution in better form
> with tuning via config file I think everyone would benefit from it.
> 
> Thanks.
> 




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-10 Thread Dmitry Sivachenko

On 07 May 2014, at 18:24, Emeric Brun  wrote:
> 
> Hi All,
> 
> I suspect FreeBSD does not support process-shared mutexes (supported in both 
> linux and solaris).
> 
> I've just made a patch to add error checks on mutex init, and to fall back on 
> the SSL private session cache in the error case.


Hello,

BTW, nginx does support a shared SSL session cache on FreeBSD (probably by 
other means).
Maybe it is worth borrowing their method rather than falling back to the 
private cache?


Re: Patch with some small memory usage fixes

2014-04-28 Thread Dmitry Sivachenko
Hello,

> if (groups) free(groups);

I think these checks are redundant, because according to free(3):
-- If ptr is NULL, no action occurs.


On 29 Apr 2014, at 3:00, Dirkjan Bussink  wrote:

> Hi all,
> 
> When building HAProxy using the Clang Static Analyzer, it found a few cases 
> of invalid memory usage and leaks. I’ve attached a patch to fix these cases.
> 
> — 
> Regards,
> 
> Dirkjan Bussink
> 
> <0001-Fix-a-few-memory-usage-errors.patch>




Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 14:53, Baptiste  wrote:

> Hi All,
> 
> "When next request(s) arrive, server must to read missing pages back
> from disk.  It takes time.  Server becomes very slow for some time.
> I don't want it to be flooded by requests until it starts to respond
> fast again.  It looks like leastconn would fit this situation."
> 
> If one server is answering at 1s per request while the other one at
> 1ms in a farm of 2 servers, then server 2 will process 1000 more
> requests per second than server 1 thanks to leastconn...
> This is what you want.



Yes, provided that most of the time they both answer in 1ms, and also that the 
farm has not 2 but 50 servers.
If one is ill, its load will spread over the remaining 49...  not so scary.

I am in the process of reading about maxconn as suggested; it is probably what 
I need, but for now I am failing to understand the documentation :)


Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 13:02, Willy Tarreau  wrote:

> On Fri, Mar 07, 2014 at 01:01:04PM +0400, Dmitry Sivachenko wrote:
>>>> Now it behaves almost this way but without  "honoring specified weights".
>>> 
>>> We cannot honor both at the same time. Most products I've tested don't
>>> *even* do the round robin on equal connection counts while we do. I'm just
>>> restating the point I made in another thread on the same subject : leastconn
>>> is about balancing the active number of connections, not the total number of
>>> connections.
>> 
>> 
>> Yes, I understand that.
>> 
>> But in situation when backends are not equal, it would be nice to have an
>> ability to specify "weight" to balance number of *active* connections
>> proportional to backend's weight.
> 
> It's not a problem of option but of algorithm unfortunately.
> 
>> Otherwise I am forced to maintain a pool of backends with equal hardware for
>> leastconn to work, but it is not always simple.
> 
> I really don't understand. I really think you're using leastconn while
> you'd prefer to use roundrobin then.
> 


I will explain: imagine a backend server which mmap()s a lot of data needed 
to process a request.
On startup, the data is read from disk into RAM and the server responds fast 
(roundrobin works fine).

Now imagine that at some moment part of that mmap()ed memory is freed for 
other needs.

When the next request(s) arrive, the server must read the missing pages back 
from disk.  That takes time, and the server becomes very slow for a while.
I don't want it to be flooded with requests until it starts to respond fast 
again.  It looks like leastconn would fit this situation.

But 99.9% of the time, when all servers respond equally fast, I want to be 
able to balance the load between them proportionally to their CPU count (so I 
need weights).
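The weighted selection being requested here can be sketched in a few lines. This is only an illustration of the idea (pick the server with the smallest active-connections/weight ratio), not haproxy's implementation; the server names and numbers are invented:

```python
# Weighted least-connections sketch: a server with weight 32 is allowed
# roughly twice the in-flight load of a weight-16 server before it is
# picked again.

def pick_server(servers):
    """servers: list of dicts with 'name', 'weight', 'active' keys."""
    return min(servers, key=lambda s: s["active"] / s["weight"])

farm = [
    {"name": "s16", "weight": 16, "active": 8},
    {"name": "s24", "weight": 24, "active": 8},
    {"name": "s32", "weight": 32, "active": 8},
]
# All three have 8 active connections, but s32 has the lowest load ratio.
print(pick_server(farm)["name"])  # -> s32
```

If s32 then accumulates connections (say 40 active), its ratio rises above the others and the pick moves to the next least-loaded server, so a slow server is naturally shielded, which is exactly the behaviour described above.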




Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 12:25, Willy Tarreau  wrote:

> Hi Dmitry,
> 
> On Fri, Mar 07, 2014 at 12:16:32PM +0400, Dmitry Sivachenko wrote:
>> 
>> On 06 Mar 2014, at 19:29, Dmitry Sivachenko  wrote:
>> 
>>> Hello!
>>> 
>>> I am using haproxy-1.5.22.
>>> 
>>> In a single backend I have servers with different weight configured: 16, 
>>> 24, 32 (proportional to the number of CPU cores).
>>> Most of the time they respond very fast.
>>> 
>>> When I use balance leastconn, I see in the stats web interface that they 
>>> all receive approximately equal number of connections (Sessions->Total).
>>> Shouldn't leastconn algorithm also honor weights of each backend (to pick a 
>>> backend with minimal Connections/weight value)?
>>> 
>>> Thanks.
>> 
>> I mean that with balance leastconn, I expect the following behavior:
>> -- In ideal situation, when all backends respond equally fast, it should be
>> effectively like balance roundrobin *honoring specified weights*;
>> -- When one of the backends becomes slow for some reason, it should get less
>> request based on the number of active connections
>> 
>> Now it behaves almost this way but without  "honoring specified weights".
> 
> We cannot honor both at the same time. Most products I've tested don't
> *even* do the round robin on equal connection counts while we do. I'm just
> restating the point I made in another thread on the same subject : leastconn
> is about balancing the active number of connections, not the total number of
> connections.


Yes, I understand that.

But in situations where the backends are not equal, it would be nice to have 
the ability to specify a "weight" so that the number of *active* connections 
is balanced proportionally to each backend's weight.

Otherwise I am forced to maintain a pool of backends with identical hardware 
for leastconn to work, and that is not always simple.


Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 06 Mar 2014, at 19:29, Dmitry Sivachenko  wrote:

> Hello!
> 
> I am using haproxy-1.5.22.
> 
> In a single backend I have servers with different weight configured: 16, 24, 
> 32 (proportional to the number of CPU cores).
> Most of the time they respond very fast.
> 
> When I use balance leastconn, I see in the stats web interface that they all 
> receive approximately equal number of connections (Sessions->Total).
> Shouldn't leastconn algorithm also honor weights of each backend (to pick a 
> backend with minimal Connections/weight value)?
> 
> Thanks.


I mean that with balance leastconn, I expect the following behavior:
-- in the ideal situation, when all backends respond equally fast, it should 
be effectively balance roundrobin *honoring the specified weights*;
-- when one of the backends becomes slow for some reason, it should get fewer 
requests based on the number of active connections.

Right now it behaves almost this way, but without honoring the specified 
weights.





balance leastconn does not honor weight?

2014-03-06 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5.22.

In a single backend I have servers with different weights configured: 16, 24, 
32 (proportional to the number of CPU cores).
Most of the time they respond very fast.

When I use balance leastconn, I see in the stats web interface that they all 
receive an approximately equal number of connections (Sessions->Total).
Shouldn't the leastconn algorithm also honor the weight of each server (and 
pick the server with the minimal connections/weight value)?

Thanks.


Re: ACL based on request parameter using POST method

2014-01-30 Thread Dmitry Sivachenko

On 30 Jan 2014, at 19:30, Baptiste  wrote:

> Hi Dmitry,
> 
> In Post, the parameters are in the body.
> You may be able to match them using the payload ACLs (HAProxy 1.5 only).
> 


Hello,

I tried

acl PARAM1 payload(0,500) -m sub test=1
use_backend BE1-back if PARAM1

and it does not match
(I tested with curl -d test=1 http://...)
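For what it's worth, a likely reason the payload ACL does not match is that the POST body is usually not yet buffered when the request ACLs are evaluated. Later haproxy versions (1.6 and up, if I am not mistaken — not the 1.5-dev discussed here) added "option http-buffer-request" and the "req.body" sample fetch for exactly this case. A hypothetical sketch under that assumption (backend names follow the thread, the bind address is invented):

```
frontend fe
    mode http
    bind :8080
    # wait for the request body before evaluating ACLs (1.6+)
    option http-buffer-request
    use_backend BE1-back if { req.body -m sub test=1 }
    default_backend BE-default
```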






ACL based on request parameter using POST method

2014-01-30 Thread Dmitry Sivachenko
Hello!

(haproxy-1.5-dev21)


Using urlp() I can match a specific parameter value and dispatch the request to 
different backends based on that value:

acl PARAM1 urlp(test) 1
use_backend BE1-back if PARAM1
acl PARAM2 urlp(test) 2
use_backend BE2-back if PARAM2

It works if I specify that parameter using GET method:
curl 'http://localhost:2/do?test=1'

But it does not work if I specify the same parameter using POST method:
curl -d test=1  'http://localhost:2/do'

Is there any way to write ACLs using request parameters regardless of the 
method, so that they work with both GET and POST?

Thanks!


Re: RES: RES: RES: RES: RES: RES: RES: RES: High CPU Usage (HaProxy)

2013-11-05 Thread Dmitry Sivachenko
On 05 Nov 2013, at 19:33, Fred Pedrisa  wrote:

> 
> However, in FreeBSD we can't do that IRQ Assigning, like we can on linux.
> (As far I know).
> 


JFYI: you can assign IRQs to CPUs via cpuset -x 
(I can't tell you whether it is "like on linux" or not, though).




Re: compile warning

2013-05-23 Thread Dmitry Sivachenko

On 23.05.2013, at 11:22, joris dedieu  wrote:

> 
> For my part I can't reproduce it.
> 
> $ uname -a
> FreeBSD mailhost2 9.1-RELEASE-p3 FreeBSD 9.1-RELEASE-p3 #0: Mon Apr 29
> 18:27:25 UTC 2013
> r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
> 
> $ cc -v
> Using built-in specs.
> Target: amd64-undermydesk-freebsd
> Configured with: FreeBSD/amd64 system compiler
> Thread model: posix
> gcc version 4.2.1 20070831 patched [FreeBSD]
> 
> 
> rm src/ev_kqueue.o; cc -Iinclude -Iebtree -Wall -Werror -O2 -pipe -O2
> -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT
> -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE
> -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include
> -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\"
> -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o
> src/ev_kqueue.c
> 
> Doesn't produce any warning with haproxy-ss-20130515.
> 
> Could you please tell me how to reproduce it ?
> 


Update to FreeBSD-9-STABLE if you want to reproduce it.

This change was MFC'd to 9/stable after 9.1-RELEASE:
http://svnweb.freebsd.org/base/stable/9/sys/sys/queue.h?view=log




compile warning

2013-05-22 Thread Dmitry Sivachenko
Hello!

When compiling the latest haproxy snapshot on FreeBSD-9 I get the following 
warning:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\" -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o src/ev_kqueue.c
In file included from include/types/listener.h:33,
 from include/types/global.h:29,
 from src/ev_kqueue.c:30:
include/common/mini-clist.h:141:1: warning: "LIST_PREV" redefined
In file included from /usr/include/sys/event.h:32,
 from src/ev_kqueue.c:21:
/usr/include/sys/queue.h:426:1: warning: this is the location of the previous 
definition

JFYI.


Re: haproxy dumps core when unable to resolve host names

2013-03-15 Thread Dmitry Sivachenko

On 15.03.2013, at 15:54, Willy Tarreau  wrote:

> Hi Dmitry,
> 
> On Fri, Mar 15, 2013 at 03:25:10PM +0400, Dmitry Sivachenko wrote:
>> Hello!
>> 
>> I am using haproxy-1.5-dev17.  I use hostnames in my config file rather than 
>> IPs.
>> If DNS is not working, haproxy will dump core on start or config check.
>> 
>> How to repeat:
>> Put some fake stuff in /etc/resolv.conf so resolver does not work.
>> 
>> Run haproxy -c -f :
>> 
>> /tmp# ./haproxy -c -f ./haproxy.conf
>> Segmentation fault (core dumped)
> 
> This is a known issue with GETADDRINFO which was fixed in a
> recent snapshot :
> 
>  commit 58ea039115f3faaf29529e0df97f4562436fdd09
>  Author: Sean Carey 
>  Date:   Fri Feb 15 23:39:18 2013 +0100
> 
>BUG/MEDIUM: config: fix parser crash with bad bind or server address
> 
>If an address is improperly formated on a bind or server address
>and haproxy is built for using getaddrinfo, then a crash may occur
>upon the call to freeaddrinfo().
> 
>Thanks to Jon Meredith for helping me patch this for SmartOS,
>I am not a C/GDB wizard.
> 
> I think you'd better update to latest snapshot until we emit dev18.
> 



Ah, okay, thanks!




haproxy dumps core when unable to resolve host names

2013-03-15 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5-dev17.  I use hostnames in my config file rather than 
IPs.
If DNS is not working, haproxy will dump core on start or config check.

How to repeat:
Put some fake stuff in /etc/resolv.conf so resolver does not work.

Run haproxy -c -f :

/tmp# ./haproxy -c -f ./haproxy.conf
Segmentation fault (core dumped)

# ./haproxy -vv
HA-Proxy version 1.5-dev17 2012/12/28
Copyright 2000-2012 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8x 10 May 2012
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.




Re: compress only if response size is big enough

2013-03-02 Thread Dmitry Sivachenko
Hello!

What do you guys think?

I meant something similar to nginx's  gzip_min_length.


On 07.02.2013, at 15:56, Dmitry Sivachenko  wrote:

> Hello!
> 
> It would be nice to add some parameter.
> So haproxy will compress an HTTP response only if the response size is bigger
> than that value.
> 
> Because compressing small data can lead to a size increase and is useless.
> 
> Thanks.




compress only if response size is big enough

2013-02-07 Thread Dmitry Sivachenko
Hello!

It would be nice to add some parameter.
So haproxy will compress an HTTP response only if the response size is bigger 
than that value.

Because compressing small data can lead to a size increase and is useless.

Thanks.
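The size-increase claim is easy to verify: gzip output carries a fixed header and trailer (plus deflate framing), so tiny payloads grow while large ones shrink. A quick sketch using Python's gzip module, just to illustrate the point (unrelated to haproxy's own zlib usage):

```python
import gzip

small = b"OK"
big = b"hello world " * 1000

# Tiny payload: the fixed gzip overhead exceeds any savings.
print(len(small), len(gzip.compress(small)))  # compressed output is larger

# Large repetitive payload: compression wins by a wide margin.
print(len(big), len(gzip.compress(big)))      # compressed output is much smaller
```

This is exactly why a gzip_min_length-style threshold is useful: below a few tens of bytes, compression can only hurt.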


max sessions rate on stat page

2012-12-27 Thread Dmitry Sivachenko
Hello!

Every time I reload the haproxy process, I see on the stats page a "max session 
rate" for backends and frontends 2-3 times higher than the "current session 
rate".

This happens even if I open the stats page a few seconds after the reload, and 
it is 100% reproducible.

It seems haproxy calculates these numbers incorrectly during its first second 
of running.
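One plausible mechanism — an assumption on my part, not a confirmed diagnosis of haproxy's frequency counters — is that a per-second rate extrapolated from a partially observed window right after startup overshoots: the burst of clients reconnecting at reload scales up to an inflated "max" that steady traffic never beats. A toy illustration (all numbers invented):

```python
def extrapolated_rate(events_so_far, elapsed_s):
    """Naively scale a partially observed window up to a full second."""
    return events_so_far / elapsed_s

# Steady traffic is 100 requests/s, but right after a reload all clients
# reconnect at once: 50 requests land within the first 0.25 s.
burst_estimate = extrapolated_rate(50, 0.25)
steady_estimate = extrapolated_rate(100, 1.0)

print(burst_estimate, steady_estimate)  # -> 200.0 100.0 (burst looks 2x higher)
```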


Re: [ANNOUNCE] haproxy-1.5-dev16

2012-12-26 Thread Dmitry Sivachenko

On 26.12.2012, at 1:03, Willy Tarreau  wrote:
>> 
> 
> This fix is still wrong, as it only accepts one add-header rule, so
> please use the other fix posted in this thread by "seri0528" instead.
> 


Thanks a lot! Works now.




Re: [ANNOUNCE] haproxy-1.5-dev16

2012-12-24 Thread Dmitry Sivachenko
Hello!

After updating from -dev15, the following stats listener:

listen stats9 :30009
mode http
stats enable
stats uri /
stats show-node
stats show-legends

returns 503/Service unavailable.

With -dev15 it shows statistics page.


On 24.12.2012, at 19:51, Willy Tarreau  wrote:

> Hi all,
> 
> Here comes 1.5-dev16. Thanks to the amazing work Sander Klein and John
> Rood have done at Picturae ICT ( http://picturae.com/ ) we could finally
> spot the freeze bug after one week of restless digging ! This bug was
> amazingly hard to reproduce in general and would only affect POST requests
> under certain circumstances that I never could reproduce despite many
> efforts. It is likely that other users were affected too but did not
> notice it because end users did not complain (I'm thinking about webmail
> and file sharing environments for example).
> 
> During this week of code review and testing, around 10 other minor to medium
> bugs related to the polling changes could be fixed.
> 
> Another nasty bug was fixed on SSL. It happens that OpenSSL maintains a
> global error stack that must constantly be flushed (surely they never heard
> how errno works). The result is that some SSL errors could cause another SSL
> session to break as a side effect of this error. This issue was reported by
> J. Maurice (wiz technologies) who first encountered it when playing with the
> tests on ssllabs.com.
> 
> Another bug present since 1.4 concerns the premature close of the response
> when the server responds before the end of a POST upload. This happens when
> the server responds with a redirect or with a 401, sometimes the client would
> not get the response. This has been fixed.
> 
> Krzysztof Rutecki reported some issues on client certificate checks, because
> the check for the presence of the certificate applies to the connection and
> not just to the session. So this does not match upon session resumption. Thus
> another ssl_c_used ACL was added to check for such sessions.
> 
> Among the other nice additions, it is now possible to log the result of any
> sample fetch method using %[]. This allows to log SSL certificates for 
> example.
> And similarly, passing such information to HTTP headers was implemented too,
> as "http-request add-header" and "http-request set-header", using the same
> format as the logs. This also becomes useful for combining headers !
> 
> Some people have been asking for logging the amount of uploaded data from the
> client to the server, so this is now available as the %U log-format tag.
> Some other log-format tags were deprecated and replaced with easier to remind
> ones. The old ones still work but emit a warning suggesting the replacement.
> 
> And last, the stats HTML version was improved to present detailed information
> using hover tips instead of title attributes, allowing multi-line details on
> the page. The result is nicer, more readable and more complete.
> 
> The changelog is short enough to append it here after the usual links :
> 
>Site index   : http://haproxy.1wt.eu/
>Sources  : http://haproxy.1wt.eu/download/1.5/src/devel/
>Changelog: http://haproxy.1wt.eu/download/1.5/src/CHANGELOG
>Cyril's HTML doc : 
> http://cbonte.github.com/haproxy-dconv/configuration-1.5.html
> 
> At the moment, nobody broke the latest snapshots, so I think we're getting
> closer to something stable to base future work on.
> 
> Thanks!
> Willy
> 
> --
> Changelog from 1.5-dev15 to 1.5-dev16:
>  - BUG/MEDIUM: ssl: Prevent ssl error from affecting other connections.
>  - BUG/MINOR: ssl: error is not reported if it occurs simultaneously with 
> peer close detection.
>  - MINOR: ssl: add fetch and acl "ssl_c_used" to check if current SSL session 
> uses a client certificate.
>  - MINOR: contrib: make the iprange tool grep for addresses
>  - CLEANUP: polling: gcc doesn't always optimize constants away
>  - OPTIM: poll: optimize fd management functions for low register count CPUs
>  - CLEANUP: poll: remove a useless double-check on fdtab[fd].owner
>  - OPTIM: epoll: use a temp variable for intermediary flag computations
>  - OPTIM: epoll: current fd does not count as a new one
>  - BUG/MINOR: poll: the I/O handler was called twice for polled I/Os
>  - MINOR: http: make resp_ver and status ACLs check for the presence of a 
> response
>  - BUG/MEDIUM: stream-interface: fix possible stalls during transfers
>  - BUG/MINOR: stream_interface: don't return when the fd is already set
>  - BUG/MEDIUM: connection: always update connection flags prior to computing 
> polling
>  - CLEANUP: buffer: use buffer_empty() instead of buffer_len()==0
>  - BUG/MAJOR: stream_interface: fix occasional data transfer freezes
>  - BUG/MEDIUM: stream_interface: fix another case where the reader might not 
> be woken up
>  - BUG/MINOR: http: don't abort client connection on premature responses
>  - BUILD: no need to clean up when making git-tar
>  - MINOR: log: add a tag for am

unlink local sockets upon exit

2012-12-11 Thread Dmitry Sivachenko
Hello!

Why does haproxy not unlink local sockets (the stats socket, and other local 
sockets if there are frontends bound to local unix sockets) upon exit?

Is there any special reason not to do it?

Thanks!
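The practical consequence of not unlinking is easy to demonstrate: an AF_UNIX socket's filesystem entry survives close(), and a later bind() to the same path fails until the stale file is removed. A small self-contained sketch (temporary paths, nothing haproxy-specific):

```python
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "stats.sock")

s1 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s1.bind(path)
s1.close()  # closing does NOT remove the filesystem entry

s2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s2.bind(path)  # fails: the stale socket file is still present
    rebound = True
except OSError:
    rebound = False
finally:
    s2.close()

os.unlink(path)  # what the question asks haproxy to do on exit
print(rebound)   # -> False
```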


Some thoughts about redispatch

2012-11-28 Thread Dmitry Sivachenko
Hello!

If haproxy can't send a request to the backend server, it will retry the same
backend 'retries' times, waiting 1 second between retries, and if 'option
redispatch' is used, the last retry will go to another backend.

There is a (I think very common) usage scenario where:
1) all requests are independent of each other and all backends are equal, so
there is no need to try to route requests to the same backend (if it failed, we
will try the dead one again and again while another backend could serve the
request right now);

2) there is a response-time policy for requests and the 1 second wait time is
just too long (all requests are handled in under 500ms and the client software
will not wait any longer).

I propose to introduce new parameters in the config file:
1) "redispatch always": when set, haproxy will always retry a different backend
after the connection to the first one fails;
2) allow the 1 second wait time between redispatches to be overridden in the
config file (including the value of 0 == immediate).

Right now I use the attached patch to overcome these restrictions.  It is an
ugly hack, but if you could include it in the distribution in a better form,
with tuning via the config file, I think everyone would benefit from it.

Thanks.
--- session.c.orig  2012-11-22 04:11:33.0 +0400
+++ session.c   2012-11-22 16:15:04.0 +0400
@@ -877,7 +877,7 @@ static int sess_update_st_cer(struct ses
 * bit to ignore any persistence cookie. We won't count a retry nor a
 * redispatch yet, because this will depend on what server is selected.
 */
-   if (objt_server(s->target) && si->conn_retries == 0 &&
+   if (objt_server(s->target) &&
s->be->options & PR_O_REDISP && !(s->flags & SN_FORCE_PRST)) {
sess_change_server(s, NULL);
if (may_dequeue_tasks(objt_server(s->target), s->be))
@@ -903,7 +903,7 @@ static int sess_update_st_cer(struct ses
si->err_type = SI_ET_CONN_ERR;
 
si->state = SI_ST_TAR;
-   si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
return 0;
}
return 0;


Re: Need more info on compression

2012-11-28 Thread Dmitry Sivachenko
On 24.11.2012 18:25, Willy Tarreau wrote:
> Hi Dmitry,
> 
> On Thu, Nov 22, 2012 at 08:03:26PM +0400, Dmitry Sivachenko wrote:
>> Hello!
>>
>> I was reading docs about HTTP compression support in -dev13 and it is a bit
>> unclear to me how it works.
>>
>> Imagine I have:
>> compression algo gzip
>> compression type text/html text/javascript text/xml text/plain
>>
>> in defaults section.
>>
>> What will haproxy do if:
>> 1) backend server does NOT support compression;
> 
> Haproxy will compress the matching responses.
> 
>> 2) backend server does support compression;
> 
> You have two possibilities :
>   - either you just have the lines above, and the server will see
> the Accept-Encoding header from the client and will compress
> the response ; in this case, haproxy will see the compressed
> response and will not compress again ;
> 
>   - or you also have a "compression offload" line. In this case,
> haproxy will remove the "Accept-Encoding" header before passing
> the request to the server. The server will then *not* compress,
> and haproxy will compress the response. This is what I'm doing
> at home because the compressing server is bogus and sometimes
> emits wrong chunked encoded data!
> 
>> 3) backend server does support compression and there is no these two
>> compression* lines in haproxy config.
> 
> Then haproxy's normal behaviour remains unchanged, the server compresses
> if it wants to and haproxy transfers the response unmodified.
> 
>> I think documentation needs to clarify things a bit.
> 
> Possibly, however I don't know what to clarify nor how, it's always
> difficult to guess how people will understand a doc :-(
> 
> Could you please propose some changes ? I would be happy to improve
> the doc if it helps people understand it.
> 


Thank you very much for the explanation.

Please consider the attached patch; I hope it will clarify haproxy's behavior 
a bit.

--- configuration.txt.orig  2012-11-26 06:11:05.0 +0400
+++ configuration.txt   2012-11-28 17:45:25.0 +0400
@@ -1903,16 +1903,23 @@
 
   Compression will be activated depending on the Accept-Encoding request
   header. With identity, it does not take care of that header.
+  If backend servers support HTTP compression, these directives
+  will be a no-op: haproxy will see the compressed response and will not
+  compress again. If backend servers do not support HTTP compression and
+  there is an Accept-Encoding header in the request, haproxy will compress
+  the matching response.
 
   The "offload" setting makes haproxy remove the Accept-Encoding header to
   prevent backend servers from compressing responses. It is strongly
   recommended not to do this because this means that all the compression work
   will be done on the single point where haproxy is located. However in some
   deployment scenarios, haproxy may be installed in front of a buggy gateway
-  and need to prevent it from emitting invalid payloads. In this case, simply
-  removing the header in the configuration does not work because it applies
-  before the header is parsed, so that prevents haproxy from compressing. The
-  "offload" setting should then be used for such scenarios.
+  with broken HTTP compression implementation which can't be turned off.
+  In that case haproxy can be used to prevent that gateway from emitting
+  invalid payloads. In this case, simply removing the header in the
+  configuration does not work because it applies before the header is parsed,
+  so that prevents haproxy from compressing. The "offload" setting should
+  then be used for such scenarios.
 
   Compression is disabled when:
 * the server is not HTTP/1.1.


Re: -dev13 dumps core on reload

2012-11-22 Thread Dmitry Sivachenko
On 23.11.2012 11:18, Willy Tarreau wrote:
> I'd be interested in knowing if your config enables compression, because
> that's an area where we very recently introduced new pools, so there could
> be a relation.
> 


It does, but that does not matter: when I comment compression out, it still dumps
core.  It is also unrelated to graceful restart (haproxy -sf); it dumps core
on normal exit too.

I have setups with and without SSL too: coredump does not depend on SSL.



Need more info on compression

2012-11-22 Thread Dmitry Sivachenko
Hello!

I was reading docs about HTTP compression support in -dev13 and it is a bit
unclear to me how it works.

Imagine I have:
compression algo gzip
compression type text/html text/javascript text/xml text/plain

in defaults section.

What will haproxy do if:
1) backend server does NOT support compression;
2) backend server does support compression;
3) the backend server supports compression and the two compression* lines
above are absent from the haproxy config.

I think documentation needs to clarify things a bit.

In return, I am attaching a small patch which fixes 2 typos.

Thanks!
--- configuration.txt.orig  2012-11-22 04:11:33.0 +0400
+++ configuration.txt   2012-11-22 19:58:46.0 +0400
@@ -1887,7 +1887,7 @@
 offload  makes haproxy work as a compression offloader only (see notes).
 
   The currently supported algorithms are :
-identity  this is mostly for debugging, and it was useful for developping
+identity  this is mostly for debugging, and it was useful for developing
   the compression feature. Identity does not apply any change on
   data.
 
@@ -1901,7 +1901,7 @@
   This setting is only available when support for zlib was built
   in.
 
-  Compression will be activated depending of the Accept-Encoding request
+  Compression will be activated depending on the Accept-Encoding request
   header. With identity, it does not take care of that header.
 
   The "offload" setting makes haproxy remove the Accept-Encoding header to


Re: option accept-invalid-http-request

2012-10-24 Thread Dmitry Sivachenko

On 24.10.2012 19:13, Jonathan Matthews wrote:

On 24 October 2012 16:03, Dmitry Sivachenko  wrote:

Hello!

I am running haproxy-1.4.22 with option accept-invalid-http-request turned
on (the default).


Do you actually mean "off" here?



Yes, sorry.





It seems that haproxy accepts requests with an unencoded '%' character
in them:

http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen

(note unencoded % after 100).

I see such requests in my backend's log.  I expected haproxy to return HTTP 400
(Bad Request) in such cases.

Is it a bug or am I missing something?


Percentage signs are valid in URIs. Your application could be doing
/anything/ with them; HAProxy doesn't know what.
I don't /believe/ it's a validating parser's job to disallow these -
it sounds like you want more of a WAF.



Well, at least from Wikipedia:
http://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_the_percent_character

Because the percent ("%") character serves as the indicator for percent-encoded 
octets, it must be percent-encoded as "%25" for that octet to be used as data 
within a URI.


When haproxy encounters, say, an unencoded whitespace character, it returns HTTP 
400.  Why should '%' be an exception?
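The rule from the Wikipedia excerpt is easy to demonstrate with Python's standard library (nothing haproxy-specific, just an illustration of how '%' used as data must become '%25'):

```python
from urllib.parse import quote, unquote

# A '%' that is meant as data must itself be encoded as '%25';
# spaces become '%20', letters and digits pass through unchanged.
encoded = quote("100% Pure Mulberry Queen")
print(encoded)            # 100%25%20Pure%20Mulberry%20Queen
assert unquote(encoded) == "100% Pure Mulberry Queen"
```

Round-tripping through unquote() recovers the original string, which is exactly why the unencoded form in the reported URL is ambiguous.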






option accept-invalid-http-request

2012-10-24 Thread Dmitry Sivachenko

Hello!

I am running haproxy-1.4.22 with option accept-invalid-http-request turned on 
(the default).


It seems that haproxy accepts requests with an unencoded '%' 
character in them:


http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen

(note unencoded % after 100).

I see such requests in my backend's log.  I expected haproxy to return HTTP 400 (Bad 
Request) in such cases.


Is it a bug or am I missing something?

Thanks!



Re: Dump of invalid requests

2012-10-21 Thread Dmitry Sivachenko

On 10/21/12 12:06 AM, Willy Tarreau wrote:

On Sun, Oct 21, 2012 at 12:01:10AM +0400, Dmitry Sivachenko wrote:

As I wrote in my original e-mail, I use tune.bufsize=32768.  I did not
tweak tune.maxrewrite though.
I will try to decrease maxrewrite to 1024 and see if 'show errors' will
dump more than 16k of URL.

I don't fully understand its meaning though.
If I need to match up to 25k size requests using reqrep directive, will
tune.bufsize=32768 and tune.maxrewrite=1024 be enough for that?

Yes. The max request that can be read at once is bufsize-maxrewrite. And
since maxrewrite defaults to bufsize/2, I think you were limited to 16k
which is in the same range as your request.




Please consider the following patch for configuration.txt to clarify the
meaning of bufsize, maxrewrite and the size of HTTP request which can be
processed.

Thanks.

--- configuration.txt.orig  2012-08-14 11:09:31.0 +0400
+++ configuration.txt   2012-10-21 18:08:01.0 +0400
@@ -683,6 +683,8 @@
   statistics, and values larger than default size will increase memory usage,
   possibly causing the system to run out of memory. At least the global maxconn
   parameter should be decreased by the same factor as this one is increased.
+  If HTTP request is larger than tune.bufsize - tune.maxrewrite, haproxy will
+  return HTTP 400 (Bad Request) error.

 tune.chksize 
   Sets the check buffer size to this size (in bytes). Higher values may help

@@ -4346,8 +4348,8 @@
  # replace "www.mydomain.com" with "www" in the host name.
  reqirep ^Host:\ www.mydomain.com   Host:\ www

-  See also: "reqadd", "reqdel", "rsprep", section 6 about HTTP header
-manipulation, and section 7 about ACLs.
+  See also: "reqadd", "reqdel", "rsprep", "tune.bufsize", section 6 about
+HTTP header manipulation, and section 7 about ACLs.


 reqtarpit   [{if | unless} ]



Re: Dump of invalid requests

2012-10-20 Thread Dmitry Sivachenko

On 10/20/12 11:49 PM, Willy Tarreau wrote:

Hello Dmitry,

On Sat, Oct 20, 2012 at 10:13:47PM +0400, Dmitry Sivachenko wrote:

Hello!

I am using haproxy-1.4.22.
Now I can see the last invalid request haproxy rejected with Bad Request
return code with the following command:
$ echo "show errors" | socat stdio unix-connect:/tmp/haproxy.stats

1) The request seems to be truncated at 16k boundary.  With very large
GET requests I do not see the tail of URL string and (more important)
the following HTTP headers.  I am running with tune.bufsize=32768. Is it
possible to tune haproxy to dump the whole request?

It always dumps the whole request. What you're describing is a request
too large to fit in a buffer. It is invalid by definition since haproxy
cannot parse it fully. If you absolutely need to pass that large a
request, you can increase tune.bufsize and limit tune.maxrewrite to
1024, it will be more than enough. But be careful, a website running
with that large requests will 1) not be accessible by everyone for the
same reason (some proxies will block the request) and 2) will be
extremely slow for users with a limited uplink or via 3G/GPRS.


As I wrote in my original e-mail, I use tune.bufsize=32768.  I did not 
tweak tune.maxrewrite though.
I will try to decrease maxrewrite to 1024 and see if 'show errors' will 
dump more than 16k of URL.


I don't fully understand its meaning though.
If I need to match up to 25k size requests using reqrep directive, will
tune.bufsize=32768 and tune.maxrewrite=1024 be enough for that?

I am aware of problems 1) and 2) but we have some special service here 
at work which requires such large URLs.
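Willy's formula above (largest parsable request = tune.bufsize - tune.maxrewrite) gives 32768 - 1024 = 31744 bytes for the values discussed here, which comfortably covers 25k requests. A global-section sketch:

```
global
    tune.bufsize    32768
    tune.maxrewrite 1024
    # 32768 - 1024 = 31744 bytes available for the request, > 25k
```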



2) The command above shows *the last* rejected request.  In some cases
it complicates debugging, it would be convenient to see dumps of all
rejected requests for later analysis.  Is it possible to enable logging
of these dumps to a file or syslog?

No, because haproxy does not access any file once started, and syslog
normally does not support messages larger than 1024 chars.

What is problematic with only the last request ? Can't you connect
more often to dump it ? There is an event number in the dump for that
exact purpose, that way you know if you have already seen it or not.




The problem is that you never know when the next invalid request will arrive, so 
it is possible to miss one no matter how often you poll for new errors.


Since most requests should fit even into a 1024-byte buffer, it would be nice to 
dump at least the first 1024 bytes via syslog for debugging.






Dump of invalid requests

2012-10-20 Thread Dmitry Sivachenko

Hello!

I am using haproxy-1.4.22.
Now I can see the last invalid request haproxy rejected with Bad Request 
return code with the following command:

$ echo "show errors" | socat stdio unix-connect:/tmp/haproxy.stats

1) The request seems to be truncated at 16k boundary.  With very large 
GET requests I do not see the tail of URL string and (more important) 
the following HTTP headers.  I am running with tune.bufsize=32768. Is it 
possible to tune haproxy to dump the whole request?


2) The command above shows *the last* rejected request.  In some cases 
it complicates debugging, it would be convenient to see dumps of all 
rejected requests for later analysis.  Is it possible to enable logging 
of these dumps to a file or syslog?


Thanks in advance!



haproxy-1.4.20 crashes

2012-05-15 Thread Dmitry Sivachenko

Hello!

I am using haproxy-1.4.20 on FreeBSD-9.
It was running without any problems for a long time, but after recent 
changes in configuration it began to crash from time to time.


GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...
Core was generated by `haproxy'.
Program terminated with signal 10, Bus error.
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x00455183 in tcpv4_connect_server (si=0x8062132e8, 
be=0x8014eb000,

srv=0x8013d9400, srv_addr=0x806213470, from_addr=0x806213480)
at src/proto_tcp.c:422
422 EV_FD_SET(fd, DIR_WR);  /* for connect status */
(gdb) bt
#0  0x00455183 in tcpv4_connect_server (si=0x8062132e8, 
be=0x8014eb000,

srv=0x8013d9400, srv_addr=0x806213470, from_addr=0x806213480)
at src/proto_tcp.c:422
#1  0x004449f7 in connect_server (s=0x806213200) at 
src/backend.c:921
#2  0x00457d98 in sess_update_stream_int (s=0x806213200, 
si=0x8062132e8)

at src/session.c:374
#3  0x0045a5e7 in process_session (t=0x8057fcb40) at 
src/session.c:1403

#4  0x0040b1e3 in process_runnable_tasks (next=0x7fffdaac)
at src/task.c:234
#5  0x004047a3 in run_poll_loop () at src/haproxy.c:983
#6  0x00404f61 in main (argc=6, argv=0x7fffdb88) at 
src/haproxy.c:1264

(gdb)

Is it a known issue?
If not, I can provide more information (config, core image, etc).

Thanks in advance!



Re: X-Forwarded-For header

2011-03-25 Thread Dmitry Sivachenko
On Thu, Mar 24, 2011 at 09:12:46PM +0100, Willy Tarreau wrote:
> Hello Dmitry,
> 
> On Thu, Mar 24, 2011 at 05:28:13PM +0300, Dmitry Sivachenko wrote:
> > Hello!
> > 
> > With "option forwardfor", haproxy adds X-Forwarded-For header at the end
> > of header list.
> > 
> > But according to wikipedia:
> > http://en.wikipedia.org/wiki/X-Forwarded-For
> > 
> > and other HTTP proxies (say, nginx)
> > there is standard format to specify several intermediate IP addresses:
> > X-Forwarded-For: client1, proxy1, proxy2
> > 
> > Why don't you use this standard procedure to add the client IP?
> 
> Because these are not the standards. Standards are defined by RFCs, not
> by Wikipedia :-)


I meant more like a "de-facto standard", sorry for the confusion.
The format with a single comma-delimited X-Forwarded-For header is just more common.


> 
> We already got this question anyway. The short answer is that both forms
> are strictly equivalent, and any intermediary is free to fold multiple
> header lines into a single one with values delimited by commas. Your
> application will not notice the difference (otherwise it's utterly
> broken and might possibly be susceptible to many vulnerabilities such as
> request smuggling attacks).
> 


Okay, thanks for the explanation.
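The folding Willy refers to can be illustrated with a tiny hypothetical helper (not haproxy code): RFC 2616 section 4.2 allows repeated header fields to be combined into one comma-separated value without changing meaning.

```python
def fold_xff(headers):
    """Fold repeated X-Forwarded-For lines into the single
    comma-delimited form; both forms are equivalent per RFC 2616 4.2."""
    vals = [v for k, v in headers if k.lower() == "x-forwarded-for"]
    rest = [(k, v) for k, v in headers if k.lower() != "x-forwarded-for"]
    if not vals:
        return rest
    return rest + [("X-Forwarded-For", ", ".join(vals))]


print(fold_xff([("Host", "example.net"),
                ("X-Forwarded-For", "client1"),
                ("X-Forwarded-For", "proxy1")]))
# [('Host', 'example.net'), ('X-Forwarded-For', 'client1, proxy1')]
```

Any intermediary may apply this fold, which is why an application must treat the two forms identically.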



X-Forwarded-For header

2011-03-24 Thread Dmitry Sivachenko
Hello!

With "option forwardfor", haproxy adds X-Forwarded-For header at the end
of header list.

But according to wikipedia:
http://en.wikipedia.org/wiki/X-Forwarded-For

and other HTTP proxies (say, nginx)
there is standard format to specify several intermediate IP addresses:
X-Forwarded-For: client1, proxy1, proxy2

Why don't you use this standard procedure to add the client IP?
(I mean, if X-Forwarded-For already exists in the request headers, append
the client IP to its value instead of creating another header with the same name.)

Thanks!



X-Forwarded-For and option http-server-close

2011-03-21 Thread Dmitry Sivachenko
Hello!

We are using haproxy version 1.4.
We are trying to setup HTTP mode backend with support of HTTP keep-alive
between clients and haproxy.
For that reason we add "option http-server-close" in backend configuration.
But we also want to pass real client IP address in X-Forwarded-For header.
For that reason we add "option forwardfor" in backend configuration.

What we observe is that these two options do not work together.
If we enable HTTP keep-alive, haproxy stops adding X-Forwarded-For header.

On the other hand, if we set up nginx as the proxy server instead of haproxy,
we get both HTTP keep-alive and a correct X-Forwarded-For header.

What are the reasons for not allowing both features to work together?

Thanks in advance!



Re: haproxy-1.4.3 and keep-alive status

2010-04-26 Thread Dmitry Sivachenko
On Mon, Apr 26, 2010 at 03:41:03PM +0200, Cyril Bonté wrote:
> Hi Dmitry,
> 
> Le lundi 26 avril 2010 13:57:12, Dmitry Sivachenko a écrit :
> > When I put that server behind haproxy (version 1.4.4) I see the following:
> > 
> > 
> > 1) GET  HTTP/1.1
> > Host: host.pp.ru
> > User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) 
> > Gecko/2010041
> > 4 Firefox/3.6.3
> > Accept: text/javascript, application/javascript, */*
> > Accept-Language: en-us,ru;q=0.7,en;q=0.3
> > Accept-Encoding: gzip,deflate
> > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> > Keep-Alive: 115
> > Connection: keep-alive
> > 
> > 2) 
> > HTTP/1.1 200 OK
> > Date: Mon, 26 Apr 2010 11:45:01 GMT
> > Expires: Mon, 26 Apr 2010 11:46:01 GMT
> > Content-Type: text/javascript; charset=utf-8
> > Connection: Close
> > 
> > 
> > 
> > I have
> > mode http
> > option http-server-close
> > option http-pretend-keepalive
> > 
> > in my config (tried both with and without http-pretend-keepalive).
> > 
> > Can you please explain in more detail what the server does wrong and why
> > haproxy adds the Connection: Close header
> > (and why Firefox successfully uses HTTP keep-alive with the same server
> > without haproxy).
> 
> HAProxy can't accept the connection to be keep-alived as it doesn't provide a 
> Content-Length (nor the communication allows chunked transfer).
> Try to add a Content-Length header equal to your data length and Keep-Alive 
> should be accepted.
> 

Okay, I'll try, thanks for suggestion.

By the way: why does haproxy behave that way?  What is the technical problem with
allowing HTTP keep-alive with chunked transfer?  AFAIK the last chunk is specially
formed to indicate the end of data, so haproxy should be able to see the end of the
transmitted data without a Content-Length header.
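The framing in question, for illustration (a minimal decoder sketch, not haproxy's actual parser): each chunk is prefixed with its size in hex, and a zero-sized chunk marks the end of the body, which is how an intermediary can find the end without Content-Length.

```python
def decode_chunked(raw: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body.

    Each chunk is: <hex size>CRLF <data> CRLF.
    A zero-size chunk terminates the body.
    """
    body, pos = b"", 0
    while True:
        eol = raw.index(b"\r\n", pos)
        size = int(raw[pos:eol].split(b";")[0], 16)  # ignore chunk extensions
        if size == 0:
            return body          # last chunk: end of data, no length needed
        start = eol + 2
        body += raw[start:start + size]
        pos = start + size + 2   # skip chunk data and its trailing CRLF
```

For example, decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n") yields b"Wikipedia".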



Re: haproxy-1.4.3 and keep-alive status

2010-04-26 Thread Dmitry Sivachenko
On Thu, Apr 08, 2010 at 11:58:25AM +0200, Willy Tarreau wrote:
> > 3) I have sample configuration running with option http-server-close and 
> > without option httpclose set.
> > 
> > I observe the following at haproxy side:
> > 
> > Request comes:
> > 
> > GET / HTTP/1.1
> > Host: host.pp.ru
> > User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.2) 
> > Gecko/20100326 Firefox/3.6.2
> > Accept: */*
> > Accept-Language: en-us,ru;q=0.7,en;q=0.3
> > Accept-Encoding: gzip,deflate
> > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> > Keep-Alive: 115
> > Connection: keep-alive
> > 
> > So client requests keep-alive.  I suppose that haproxy should send request 
> > to 
> > backend with Connection: close (because http-server-close is set) but
> > send response to client with keep-alive enabled.
> 
> Exactly.
> 
> > But that does not happen:
> > 
> > HTTP/1.1 200 OK
> > Date: Thu, 08 Apr 2010 08:41:52 GMT
> > Expires: Thu, 08 Apr 2010 08:42:52 GMT
> > Content-Type: text/javascript; charset=utf-8
> > Connection: Close
> > 
> > jsonp1270715696732(["a", ["ab", "and", "a2", "ac", "are", "a a", "ad", "a 
> > b", "a1", "about"]])
> > 
> > 
> > Why haproxy responds to client with Connection: Close?
> 
> Because the server did not provide information required to make the keep-alive
> possible. In your case, there is no "content-length" nor any 
> "transfer-encoding"
> header, so the only way the client has to find the response end, is the 
> closure
> of the connection.
> 
> An exactly similar issue was identified on Tomcat and Jetty. They did not use
> transfer-encoding when the client announces it intends to close. The Tomcat
> team was cooperative and recently agreed to improve that. In the mean time,
> we have released haproxy 1.4.4 which includes a workaround for this : combine
> "option http-pretend-keepalive" with "option http-server-close" and your 
> server
> will believe you're doing keep-alive and may try to send a more appropriate
> response. At least this works with Jetty and Tomcat, though there is nothing
> mandatory in this area.
> 

Hello!

Here is a sample HTTP session with my (hand-made) server.

1) GET / HTTP/1.1
Host: hots.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) Gecko/2010041
4 Firefox/3.6.3
Accept: text/javascript, application/javascript, */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

2) 
HTTP/1.1 200 OK
Date: Mon, 26 Apr 2010 11:34:19 GMT
Expires: Mon, 26 Apr 2010 11:35:19 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Keep-Alive
Transfer-Encoding: chunked



tcpdump analysis of several subsequent requests shows that HTTP keep-alive works
in my case.

When I put that server behind haproxy (version 1.4.4) I see the following:


1) GET  HTTP/1.1
Host: host.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) Gecko/2010041
4 Firefox/3.6.3
Accept: text/javascript, application/javascript, */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

2) 
HTTP/1.1 200 OK
Date: Mon, 26 Apr 2010 11:45:01 GMT
Expires: Mon, 26 Apr 2010 11:46:01 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Close



I have
mode http
option http-server-close
option http-pretend-keepalive

in my config (tried both with and without http-pretend-keepalive).

Can you please explain in more detail what the server does wrong and why haproxy
adds the Connection: Close header
(and why Firefox successfully uses HTTP keep-alive with the same server without
haproxy).

Thanks in advance!



haproxy-1.4.3 and keep-alive status

2010-04-08 Thread Dmitry Sivachenko
Hello!

I am testing version 1.4.3 of haproxy and I am getting a bit confused with
the HTTP keep-alive support status.

1) Is server-side HTTP keep-alive supported at all?
The existence of option http-server-close makes me believe that it is
enabled unless that option is used.

2) Is it true that client side HTTP keep-alive is also enabled by default unless
option httpclose is used?

3) I have sample configuration running with option http-server-close and 
without option httpclose set.

I observe the following at haproxy side:

Request comes:

GET / HTTP/1.1
Host: host.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.2) 
Gecko/20100326 Firefox/3.6.2
Accept: */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

So the client requests keep-alive.  I suppose that haproxy should send the request to 
the backend with Connection: close (because http-server-close is set) but
send the response to the client with keep-alive enabled. But that does not happen:

HTTP/1.1 200 OK
Date: Thu, 08 Apr 2010 08:41:52 GMT
Expires: Thu, 08 Apr 2010 08:42:52 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Close

jsonp1270715696732(["a", ["ab", "and", "a2", "ac", "are", "a a", "ad", "a b", 
"a1", "about"]])


Why haproxy responds to client with Connection: Close?

Thanks in advance!



Re: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-12 Thread Dmitry Sivachenko
On Mon, Jan 11, 2010 at 09:08:03PM +0100, Willy Tarreau wrote:
> 
> > I mean I want client connecting to haproxy NOT to use keep-alive,
> > but to utilize keep-alive between haproxy and backend servers.
> 
> Hmmm that's different. There are issues with the HTTP protocol
> itself making this extremely difficult. When you're keeping a
> connection alive in order to send a second request, you never
> know if the server will suddenly close or not. If it does, then
> the client must retransmit the request because only the client
> knows if it takes a risk to resend or not. An intermediate
> equipemnt is not allowed to do so because it might send two
> orders for one request.
> 
> The problem is, the clients are already aware of this and happily
> replay a request after the first one in case of unexpected session
> termination. But they never do this if the session terminates during
> the first request.
> 
> So by doing what you describe, your clients would regularly get some
> random server errors when a server closes a connection it does not
> want to sustain anymore before haproxy has a chance to detect it.
> 
> Another issue is that there are (still) some buggy applications which
> believe that all the requests from a same session were initiated by
> the same client. So such a feature must be used with extreme care.
> 

Imagine the following scenario: we have a large number of requests from
different clients.  Each client sends requests rarely, so there is no need for
keep-alive between the client and haproxy.

haproxy forwards requests to a number of backends, and each request fetches a 
rather small amount of data.  So we have a rather high packet rate on the
proxy server, and maintaining keep-alive between haproxy and the backends
should reduce it greatly (eliminating the connection setup/shutdown sequence;
all data fits into one or two data packets).

This is like a (dynamic) pool of connections from haproxy to the backends,
where each request from a client goes via one of the already established
connections to a backend (if no connection is available, a new one is opened).

I understand your arguments about edge cases with broken clients etc., but
this is what the config file is for :)  You can enable/disable features
depending on the situation.

What do you think about it?
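The pool described above can be sketched in a few lines (a toy illustration in Python, not haproxy code; it deliberately ignores the stale-connection race Willy describes, where a server closes an idle connection just before it is reused):

```python
import queue


class BackendPool:
    """Dynamic pool of keep-alive connections to one backend server.

    connect_fn is a caller-supplied function that opens a new
    connection; any object with a close() method works here.
    """

    def __init__(self, connect_fn, max_idle=10):
        self._connect = connect_fn
        self._idle = queue.Queue(maxsize=max_idle)

    def acquire(self):
        try:
            # Reuse an already established connection when one is idle...
            return self._idle.get_nowait()
        except queue.Empty:
            # ...otherwise establish a new one, as described above.
            return self._connect()

    def release(self, conn):
        try:
            self._idle.put_nowait(conn)  # keep it alive for the next request
        except queue.Full:
            conn.close()                 # pool is full: really close it
```

A request handler would call acquire() before forwarding and release() afterwards, so connection setup/teardown only happens when the pool is empty or full.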



Re: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-11 Thread Dmitry Sivachenko
On Mon, Jan 04, 2010 at 12:13:49AM +0100, Willy Tarreau wrote:
> Hi all,
> 
> Yes that's it, it's not a joke !
> 
>  -- Keep-alive support is now functional on the client side. --
> 

Hello!

Are there any plans to implement server-side HTTP keep-alive?

I mean I want client connecting to haproxy NOT to use keep-alive,
but to utilize keep-alive between haproxy and backend servers.

Thanks!



Re: [PATCH] [MINOR] CSS & HTML fun

2009-10-13 Thread Dmitry Sivachenko
On Tue, Oct 13, 2009 at 02:16:12PM +0200, Benedikt Fraunhofer wrote:
> Hello,
> 
> 2009/10/13 Dmitry Sivachenko :
> 
> > End tag for <ul> is optional according to
> 
> really? Something new to me :)
> 

OMG, sorry, I am blind.

Forget about that.



Re: [PATCH] [MINOR] CSS & HTML fun

2009-10-13 Thread Dmitry Sivachenko
On Mon, Oct 12, 2009 at 11:39:54PM +0200, Krzysztof Piotr Oledzki wrote:
> >From 6fc49b084ad0f4513c36418dfac1cf1046af66da Mon Sep 17 00:00:00 2001
> From: Krzysztof Piotr Oledzki 
> Date: Mon, 12 Oct 2009 23:09:08 +0200
> Subject: [MINOR] CSS & HTML fun
> 
> This patch makes stats page about 30% smaller and
> "CSS 2.1" + "HTML 4.01 Transitional" compliant.
> 
> There should be no visible differences.
> 
> Changes:
>  - add missing </ul>

End tag for <ul> is optional according to
http://www.w3.org/TR/html401/struct/lists.html#edef-UL



Re: [PATCH 1/2] [MEDIUM] Collect & show information about last health check

2009-09-07 Thread Dmitry Sivachenko
On Sat, Sep 05, 2009 at 07:00:05PM +0200, Willy Tarreau wrote:
> However, I found that it was hard to understand the status codes in
> the HTML stats page. Some people are already complaining about columns
> they don't trivially understand, but here I think that status codes
> are close to cryptic. Also, while it is not *that* hard to tell
> which one means what when you can compare all of them, they must be
> unambiguous when found individually.
> 

Please consider the "title" attribute, which may be used on most HTML elements.
The browser displays it as a popup hint when you put the mouse cursor over that element.

It consumes zero space on the display, but can provide helpful
information when needed.

Example:

<table>
  <tr>
    <td title="aaa title">aaa</td>
    <td>bbb</td>
  </tr>
</table>
Put mouse cursor on "aaa" table cell and you will see "aaa title" hint.



Re: redispatch optimization

2009-08-31 Thread Dmitry Sivachenko
On Mon, Aug 31, 2009 at 03:39:35PM +0200, Krzysztof Oledzki wrote:
> > PS: another important suggestion is to make that delay tunable
> > parameter (like timeout.connect, etc), rather than hardcode
> > 1000ms in code.
> 
> Why would you like to change the value? I found 1s very well chosen.

In our environment we have a program querying the balancer and expecting results
to be returned very fast (say, in 0.5 seconds maximum).

So I want to ask one server in the backend and, if it is not responding, 
re-ask another one immediately (or even the same one again, assuming
that just the first TCP SYN packet was lost and the server is
running normally).  So I use a low connect timeout (say, 30ms), and if the
connection fails I retry the same one once more.

After all, we can keep the 1 second default and allow that
value to be customized when needed.

> 
> 
> 
> > --- work/haproxy-1.4-dev2/src/session.c 2009-08-10 00:57:09.0 +0400
> > +++ /tmp/session.c  2009-08-31 14:28:26.0 +0400
> > @@ -306,7 +306,11 @@ int sess_update_st_cer(struct session *s
> >si->err_type = SI_ET_CONN_ERR;
> >
> >si->state = SI_ST_TAR;
> > +   if (s->srv && s->conn_retries == 0 && s->be->options & PR_O_REDISP) 
> > {
> > +   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
> > +   } else {
> >si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
> > +   }
> >return 0;
> >}
> >return 0;
> 
> 
> There is no value in adding 0ms, also SI_ST_TAR should be moved inside the 
> condition I think, not sure if it is enough.
> 

Okay, it is probably an ugly implementation (though it works), because I 
still don't completely understand the code.
Feel free to re-implement it in a better way; just grab the idea.

Thanks.



redispatch optimization

2009-08-31 Thread Dmitry Sivachenko
Hello!

If we are running with 'option redispatch' and the
'retries' parameter set to some positive value, 
the behaviour is as follows:

###
In order to avoid immediate reconnections to a server which is restarting,
  a turn-around timer of 1 second is applied before a retry occurs.

When "option redispatch" is set, the last retry may be performed on another
server even if a cookie references a different server.
###

While it makes sense to wait for some time (1 second)
before attempting another connection to the same
server, there is no reason to wait 1 second before
attempting the last connection to another server
(with option redispatch).  It is just a waste of one 
second.

Please consider the following patch to attempt the last retry 
immediately.  (If our main server does not respond, there is
no reason to assume another one can't answer now.)

PS: another important suggestion is to make that delay a tunable
parameter (like timeout.connect, etc.), rather than hardcoding
1000ms in the code.

Thanks in advance.


--- work/haproxy-1.4-dev2/src/session.c 2009-08-10 00:57:09.0 +0400
+++ /tmp/session.c  2009-08-31 14:28:26.0 +0400
@@ -306,7 +306,11 @@ int sess_update_st_cer(struct session *s
si->err_type = SI_ET_CONN_ERR;
 
si->state = SI_ST_TAR;
+   if (s->srv && s->conn_retries == 0 && s->be->options & PR_O_REDISP) {
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
+   } else {
si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   }
return 0;
}
return 0;



Re: Backend Server UP/Down Debugging?

2009-08-31 Thread Dmitry Sivachenko
On Sun, Aug 30, 2009 at 04:58:16PM +0200, Krzysztof Oledzki wrote:
> 
> 
> On Sun, 30 Aug 2009, Willy Tarreau wrote:
> 
> > On Sun, Aug 30, 2009 at 04:18:58PM +0200, Krzysztof Oledzki wrote:
> >>> I think you wanted to put HCHK_STATUS_L57OK here, not OKD since we're
> >>> in the 2xx/3xx state and not 404 disable. Or maybe I misunderstood the
> >>> OKD status ?
> >>
> >> OKD means we have Layer5-7 data avalible, like for example http code.
> >> Several times I found that some of my servers were misconfigured and were
> >> returning a 3xx code redirecting to a page-not-found webpage instead of
> >> doing a proper healt-check, so I think it is good to know what was the
> >> response, even if it was OK (2xx/3xx).
> >
> > Ah OK that makes sense now. It's a good idea to note that data is
> > available, for later when we want to capture it whole. Indeed, I'd
> > like to reuse the same capture principle as is used in proxies for
> > errors. It does not take *that* much space and is so much useful
> > already that we ought to implement it soon there too !
> 
> OK, I found where your confusion comes from - the diff was incomplete, 
> there was no include/types/checks.h file that explains how 
> HCHK_STATUS_L57OK differs from HCHK_STATUS_L57OKD and also makes it 
> possible to compile the code. :(
> 
> Dmitry, could you please use this patch instead? ;)
> 

Okay, thank you.



Re: TCP log format question

2009-08-28 Thread Dmitry Sivachenko
> On Thu, Aug 27, 2009 at 06:39:51AM +0200, Willy Tarreau wrote:
> Hmmm dontlog-normal only works in HTTP mode.
> 

BTW, the manual explicitly states that 'option dontlog-normal'
works in TCP mode. See section 8.2.2 "TCP Log format":
#
Successful connections will
not be logged if "option dontlog-normal" is specified in the frontend.
#





Re: Backend Server UP/Down Debugging?

2009-08-27 Thread Dmitry Sivachenko
On Thu, Aug 27, 2009 at 08:45:23AM +0200, Krzysztof Oledzki wrote:
> > On Wed, Aug 26, 2009 at 02:00:42PM -0700, Jonah Horowitz wrote:
> >> I'm watching my servers on the back end and occasionally they flap.  
> >> I'm wondering if there is a way to see why they are taken out of 
> >> service.  I'd like to see the actual response, or at least an HTTP status 
> >> code.
> >
> > right now it's not archived. I would like to keep a local copy of
> > the last request sent and response received which caused a state
> > change, but that's not implemented yet. I wanted to clean up the
> > stats socket first, but now I realize that we could keep at least
> > some info (eg: HTTP status, timeout, ...) in the server struct
> > itself and report it in the log. Nothing of that is performed right
> > now, so you may have to tcpdump at best :-(
> 
> As always, I have a patch for that, solving it nearly exactly like you 
> described it. ;) However, for the last half year I have been rather silent, 
> mostly because it is a very important time in my private life, so I think 
> I'm partially excused. ;) I know that there are some unfinished tasks (ACL 
> for example) so I'll try to push ASAP, maybe starting from the easier 
> patches, like this one. The rest will have to wait until I get back from my 
> honeymoon.


I see flapping servers in my logs too and also have no clue why
haproxy disables them.

If you have a patch to log the reason why the particular server
was disabled, I'd love to test it (I run 1.4-dev2).

Thanks.



Re: TCP log format question

2009-08-27 Thread Dmitry Sivachenko
On Thu, Aug 27, 2009 at 06:39:51AM +0200, Willy Tarreau wrote:
> I'm seeing that you have both "tcplog" and "httplog". Since they
> both add a set of flags, the union of both is enabled which means
> httplog to me. I should add a check for this so that tcplog disables
> httplog.
> 
> > In my log file I see the following lines:
> > Aug 26 18:19:50 balancer0-00 haproxy[66301]: A.B.C.D:28689 
> > [26/Aug/2009:18:19:50.034] M-front M-native/ms1 -1/1/0/-1/3 -1 339 - - CD-- 
> > 0/0/0/0/0 0/0 ""
> > 
> > 1) What does "" mean? I see no description of that field in
> > documentation of TCP log format.
> 
> this is because of "option httplog".

Aha, I see; I had the impression 'option httplog' would be
ignored in TCP mode.

I removed it and "" disappeared from the log.


> 
> > 2) Why *all* requests are being logged? 
> > (note option dontlog-normal in default section).
> > How should I change configuration to log only important events
> > (errors) and do not log the fact connection was made and served?
> 
> Hmmm dontlog-normal only works in HTTP mode.

OK, I see, though it is completely unclear after reading the manual.
This should probably be mentioned explicitly.

> Could you please
> explain what type of normal connections you would want to log
> and what type you would not want to log ? It could help making
> a choice of implementation of dontlog-normal for tcplog.
> 

I want to log exactly what manual states:
###
Setting this option ensures that
normal connections, those which experience no error, no timeout, no retry nor
redispatch, will not be logged.
###

... but for TCP mode proxy.

I mean I want to see in the logs only those connections that were redispatched, 
timed out, etc.
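For reference, the kind of configuration being asked for might look like the sketch below. This is only an illustration (the frontend/backend names are hypothetical), and — as this very thread shows — whether "option dontlog-normal" has any effect in TCP mode depends on the haproxy version:

```
# Sketch: log only abnormal TCP sessions (errors, timeouts,
# retries, redispatches), not every successful connection.
frontend tcp-front
    bind 0.0.0.0:17306
    mode tcp
    option tcplog
    option dontlog-normal   # suppress logging of error-free connections
    default_backend tcp-back

backend tcp-back
    mode tcp
    server s1 192.0.2.1:17306 check
```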

Thanks!



TCP log format question

2009-08-26 Thread Dmitry Sivachenko
Hello!

I am running haproxy-1.4-dev2 with the following
configuration (excerpt):

global
log /var/run/log local0
user www
group www
daemon
defaults
log global
mode tcp
balance roundrobin
maxconn 2000
option abortonclose
option allbackups
option httplog
option dontlog-normal
option dontlognull
option redispatch
option tcplog
retries 2

frontend M-front
bind 0.0.0.0:17306
mode tcp
acl M-acl nbsrv(M-native) ge 5
use_backend M-native if M-acl
default_backend M-foreign

backend M-native
mode tcp
server ms1 ms1:17306 check maxconn 100 maxqueue 1 weight 100
server ms2 ms2:17306 check maxconn 100 maxqueue 1 weight 100
<...>

backend M-foreign
mode tcp
server ms3 ms3:17306 check maxconn 100 maxqueue 1 weight 100
server ms4 ms4:17306 check maxconn 100 maxqueue 1 weight 100

Note that both the frontend and the two backends are running in TCP mode.

In my log file I see the following lines:
Aug 26 18:19:50 balancer0-00 haproxy[66301]: A.B.C.D:28689 
[26/Aug/2009:18:19:50.034] M-front M-native/ms1 -1/1/0/-1/3 -1 339 - - CD-- 
0/0/0/0/0 0/0 ""

1) What does "" mean? I see no description of that field in
documentation of TCP log format.
2) Why *all* requests are being logged? 
(note option dontlog-normal in default section).
How should I change configuration to log only important events
(errors) and do not log the fact connection was made and served?

Thanks in advance!



Compilation of haproxy-1.4-dev2 on FreeBSD

2009-08-24 Thread Dmitry Sivachenko
Hello!

Please consider the following patches. They are required to
compile haproxy-1.4-dev2 on FreeBSD.

Summary:
1) include  before 
2) Use IPPROTO_TCP instead of SOL_TCP
(they are both defined as 6, the TCP protocol number)

Thanks!


--- src/backend.c.orig  2009-08-24 14:49:04.0 +0400
+++ src/backend.c   2009-08-24 14:49:19.0 +0400
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 

--- src/stream_sock.c.orig  2009-08-24 14:45:15.0 +0400
+++ src/stream_sock.c   2009-08-24 14:46:19.0 +0400
@@ -16,12 +16,12 @@
 #include 
 #include 
 
-#include 
-
 #include 
 #include 
 #include 
 
+#include 
+
 #include 
 #include 
 #include 


--- src/proto_tcp.c.orig2009-08-24 14:50:03.0 +0400
+++ src/proto_tcp.c 2009-08-24 14:55:45.0 +0400
@@ -18,14 +18,14 @@
 #include 
 #include 
 
-#include 
-
 #include 
 #include 
 #include 
 #include 
 #include 
 
+#include 
+
 #include 
 #include 
 #include 
@@ -253,7 +253,7 @@ int tcp_bind_listener(struct listener *l
 #endif
 #ifdef TCP_MAXSEG
if (listener->maxseg) {
-   if (setsockopt(fd, SOL_TCP, TCP_MAXSEG,
+   if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG,
   &listener->maxseg, sizeof(listener->maxseg)) == -1) {
msg = "cannot set MSS";
err |= ERR_WARN;




Re: Question about TCP balancing

2009-08-06 Thread Dmitry Sivachenko
On Thu, Aug 06, 2009 at 12:03:25AM +0200, Willy Tarreau wrote:
> On Wed, Aug 05, 2009 at 12:01:34PM +0400, Dmitry Sivachenko wrote:
> > On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:
> > > frontend my_front
> > >   acl near_usable nbsrv(near) ge 2
> > >   acl far_usable  nbsrv(far)  ge 2
> > >   use_backend near if near_usable
> > >   use_backend far  if far_usable
> > >   # otherwise error
> > > 
> > > backend near
> > >   balance roundrobin
> > >   server near1 1.1.1.1 check
> > >   server near2 1.1.1.2 check
> > >   server near3 1.1.1.3 check
> > > 
> > > backend far
> > >   balance roundrobin
> > >   server far1  2.1.1.1 check
> > >   server far2  2.1.1.2 check
> > >   server far3  2.1.1.3 check
> > > 
> > 
> > Aha, I already came to such a solution and noticed it works only
> > in HTTP mode.
> > Since I actually do not want to parse HTTP-specific information,
> > I want to stay in TCP mode (but still use ACL with nbsrv).
> > 
> > So I should stick with 1.4 for that purpose, right?
> 
> exactly. However, keep in mind that 1.4 is development, and if
> you upgrade frequently, it may break some day. So you must be
> careful.
> 

Okay, what is the estimated release date for the 1.4 branch?



Re: Question about TCP balancing

2009-08-05 Thread Dmitry Sivachenko
On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:
> frontend my_front
>   acl near_usable nbsrv(near) ge 2
>   acl far_usable  nbsrv(far)  ge 2
>   use_backend near if near_usable
>   use_backend far  if far_usable
>   # otherwise error
> 
> backend near
>   balance roundrobin
>   server near1 1.1.1.1 check
>   server near2 1.1.1.2 check
>   server near3 1.1.1.3 check
> 
> backend far
>   balance roundrobin
>   server far1  2.1.1.1 check
>   server far2  2.1.1.2 check
>   server far3  2.1.1.3 check
> 

Aha, I already came to such a solution and noticed it works only
in HTTP mode.
Since I actually do not want to parse HTTP-specific information,
I want to stay in TCP mode (but still use ACL with nbsrv).

So I should stick with 1.4 for that purpose, right?

Or does HTTP mode act like TCP mode unless I actually use
something HTTP-specific?
In other words, will the above configuration (used in HTTP mode)
actually try to parse HTTP headers (and waste CPU cycles doing so)?

Thanks.




Re: Question about TCP balancing

2009-08-04 Thread Dmitry Sivachenko
Hello!

Thanks for clarification.

I have another question then (trying to solve my problem in a different way).

I want to setup the following configuration.
I have 2 sets of servers (backends): let's call one set NEAR (n1, n2, n3)
and the other set FAR (f1, f2, f3).

I want to spread incoming requests between the NEAR servers only
while they are alive, and move the load to the FAR servers in case the NEAR set is down.

Is it possible to setup such configuration?

I read the manual but did not find such a solution...

Thanks in advance!


On Mon, Aug 03, 2009 at 09:46:47PM +0200, Willy Tarreau wrote:
> No it's not, and it's not only a configuration issue, it's an OS
> limitation. The only way to achieve this is to stop listening to
> the port then listen again to re-enable the port. On some OSes, it
> is possible. On other ones, you have to rebind (and sometimes close
> then recreate a new socket). But once your process has dropped
> privileges, you can't always rebind if the port is <1024 for
> instance.
> 
> So instead of having various behaviours for various OSes, it's
> better to make them behave similarly.
> 
> I have already thought about adding an OS-specific option to do
> that, but I have another problem with that. Imagine that your
> servers are down. You stop listening to the port. At the same time,
> someone else starts listening (eg: you start a new haproxy without
> checking the first one, or an FTP transfer uses this port, ...).
> What should be done when the servers are up again ? Haproxy will
> not be able to get its port back because someone else owns it.
> 
> So, by lack of a clean and robust solution, I prefer not to
> experiment in this area.
> 



Question about TCP balancing

2009-08-03 Thread Dmitry Sivachenko
Hello!

I am trying to set up haproxy 1.3.19 to use it as
a TCP load balancer.

Relevant portion of config looks like:

listen  test 0.0.0.0:17000
mode tcp
balance roundrobin
server  srv1 srv1:17100 check inter 2
server  srv2 srv2:17100 check inter 2
server  srv3 srv3:17100 check inter 2

Now imagine the situation where all 3 backends are down
(no program listens on port 17100; the OS responds with Connection Refused).

In that situation haproxy still listens on port 17000 and closes connections
immediately:
> telnet localhost 17000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.

Is it possible to configure haproxy so that it stops listening on the port
when all backends are down, so clients receive
Connection Refused as if nothing were listening on the TCP port at all?

Thanks in advance!
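As the reply elsewhere in this archive explains, haproxy cannot safely close and re-open its listening socket. More recent haproxy versions can at least reject connections early when no server is available; a sketch of that workaround is below (an assumption on my part, not part of the original thread — and note the client still sees an accepted-then-closed connection, not a true Connection Refused):

```
listen test
    bind 0.0.0.0:17000
    mode tcp
    balance roundrobin
    # Drop incoming connections immediately when no backend server is up.
    # The listening socket stays open, so clients get a close/reset
    # rather than ECONNREFUSED.
    tcp-request connection reject if { nbsrv(test) lt 1 }
    server srv1 srv1:17100 check inter 2000
    server srv2 srv2:17100 check inter 2000
    server srv3 srv3:17100 check inter 2000
```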


