Re: How do you tell if a url has a path

2014-07-08 Thread Baptiste
On Tue, Jul 8, 2014 at 6:55 AM, Jeffrey Scott Flesher Gmail
 wrote:
> I want to check the URL to see if any path is passed,
> http://domain.tdl
> or
> http://domain.tdl/
> as such, both of these are considered not to have a path,
> my problem is that I only want to rewrite the path,
> if either of the two are true, meaning it has no path,
> this fails:
> acl has_path_uri path_beg -i /
> If the url has no path I want to add a ww to it as such:
> http://domain.tdl/ww
> so that my wthttp app will work,
> but if I use
> acl has_ww_uri path_beg -i /ww
> reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if !has_ww_uri
> it rewrites every url that does not have ww in it,
> which is not what I want, because it rewrites resources like css and images,
> so how do I determine if the url has no path?
>
> Thanks for any help.
>


Hi Jeff,

Simply use the following acl:
  acl shortest_path path /
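
For example, combined with the rewrite rule from your mail, this gives
something like the following (an untested sketch; the regex and the /ww
prefix are the ones from your question):

  acl shortest_path path /
  # prepend /ww only when the request path is exactly "/"
  reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if shortest_path

Static resources such as css and images keep their longer paths, so they
are not rewritten anymore.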

Baptiste



Re: Difference between "Disable" and "soft stop"

2014-07-08 Thread Baptiste
On Mon, Jul 7, 2014 at 10:12 PM, Pavlos Parissis
 wrote:
> On 07/07/2014 11:49 AM, David wrote:
>> Hello,
>>
>> I have installed HAProxy 1.5 in my RDS farm. But when I check the "disable"
>> option for one server, this server is still active in my farm and users can
>> connect to it?
>>
>
> I assume you mean that it took a while for the server to stop receiving
> traffic after it was disabled, am I right?
>
> I have observed this only when I used TCP mode; in my case it took some
> time (20 minutes) for a server to stop getting traffic. I switched (for other
> reasons) to HTTP mode with keep-alive enabled and this particular
> behavior doesn't occur. Have you tried to enable 'option forceclose'? I
> have no clue if it will do the trick.
>
>
>> Should I use "soft stop" instead? What is the difference between these
>> two options?
>>
>> Thank you in advance for your answer.
>>
>> David.
>>
>>
>>
>>
>
>

Hi David,

Do you mean that your server is still getting new connections, or only
that active ones are maintained?
How do you disable the server: through the socket or through the
configuration file + reload?

RDS uses long-lived TCP connections.
Disabling a server in the configuration file will disable it only in the
new process, which will handle new connections.
The old process remains active and maintains the established TCP connections.
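
If you need the change to take effect in the running process without a
reload, you can also put the server in maintenance mode through the stats
socket (it must be declared with "level admin"; the socket path and the
backend/server names below are only illustrative):

  echo "disable server rds_backend/rds1" | socat stdio unix-connect:/var/run/haproxy.stat

New connections will then no longer be sent to that server by the process
the socket belongs to.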

Baptiste



Re: Need help with url rewrite

2014-07-08 Thread Baptiste
On Fri, Jul 4, 2014 at 8:42 PM, Jeffrey Scott Flesher Gmail
 wrote:
> If a Picture is worth a 1000 Words:
> If the url does not have any path like this:
> http://mad-news.net/
>
> acl has_ww_uri path_beg -i /ww
> returns false
>
> reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if !has_ww_uri
> http://mad-news.net/ww/en/
> it adds the ww; the program, which is wthttpd (Wt), defaults to en for language
> control
> Just to show you how the site looks at port 8060:
> http://mad-news.net:8060/ww/en/
> If I comment the code, the site looks fine.
>
> Note: I want only the first path to work:
> http://mad-news.net/this/ww  fails to work for the rule, it does this:
> http://mad-news.net/ww/this/this/ww
> which is not what I want, so how do I write a rule to cover this?
>
> Note: If the ww is not there, the Wt app will ignore the request, resulting in
> a 404: http://wittywizard.org/ vs http://wittywizard.org/ww.
> There is no way around this behavior if I want to have a pretty URL.
>
> My whole config. Note that it works the same in 1.4 and 1.5, but this is:
> HA-Proxy version 1.5.1 2014/06/24:
>
> global
>
> log 127.0.0.1 local0
> log 127.0.0.1 local1 notice
> maxconn 4096
> user haproxy
> group haproxy
> daemon
> # pidfile /var/run/haproxy.pid
> # stats socket /var/run/haproxy.stat mode 600
> # stats socket /tmp/haproxy
>
> defaults
> log global
> mode http
> option  httplog
> option  dontlognull
> retries 3
> option  redispatch
> maxconn 1000
> #contimeout 5000 # haproxy 1.4
> timeout connect 5000
> #clitimeout 5 # haproxy 1.4
> timeout client 5
> #srvtimeout 5 # haproxy 1.4
> timeout server 5
>
> frontend wt
> bind 216.224.185.71:80
> # Set inside Witty Wizard main.cpp
> acl has_ww_uri path_beg -i /ww
> reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if !has_ww_uri
> redirect prefix http://wittywizard.org code 301 if { hdr(host) -i
> www.wittywizard.org }
> # Note: see wthttpd.sh session-id-prefix
> acl srv1 url_sub wtd=wt-8060
> acl srv1_up nbsrv(bck1) gt 0
> use_backend bck1 if srv1_up srv1
> # Second Thread
> # Note: see wthttpd.sh session-id-prefix
> # acl srv2  url_sub wtd=wt-8061
> # acl srv2_up nbsrv(bck2) gt 0
> # use_backend bck2 if srv2_up srv2 has_ww_uri
> #
> default_backend bck_lb
> #
> backend bck_lb
> balance roundrobin
> #server srv1 108.59.251.28:8060 track bck1/srv1
> server srv1 216.224.185.71:8060 track bck1/srv1
>
> backend bck1
> balance roundrobin
> #server srv1 108.59.251.28:8060 check
> server srv1 216.224.185.71:8060 check
>
> backend bck2
> balance roundrobin
> #server srv2 108.59.251.28:8061 check
> server srv2 216.224.185.71:8060 check
>
> As you can see, the path seems to have changed, not sure what is going on,
> any ideas?
>
> Thanks
>
> On Thu, 2014-07-03 at 22:40 +0200, Baptiste wrote:
>
> On Thu, Jul 3, 2014 at 9:38 PM, Jeffrey Scott Flesher Gmail
>  wrote:
>> I have a url that always begins with ww, ie http://domain.tdl/ww/en/..., I
>> want to rewrite the url to include the ww,
>> I tried the below, it works, but changes the path or something,
>> because it causes resources like css and images to not appear (404),
>> does anyone know how to fix this or do this the right way?
>>
>> acl has_ww_uri path_beg -i /ww
>> reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if !has_ww_uri
>>
>
> Hi Jeffrey,
>
> Can you clarify your question a bit, because you're confusing me.
> Please send us an example of what you get in HAProxy and how you want
> it out after HAProxy has rewritten it.
>
> Baptiste



Jeffrey,

I'm sorry, I can't answer you, because I can't understand what you
mean in your emails.

Baptiste



Re: Using the socket interface to access ACLs

2014-07-08 Thread Baptiste
On Mon, Jul 7, 2014 at 8:31 PM, William Jimenez
 wrote:
>
>
>
> On Thu, Jul 3, 2014 at 5:59 AM, Baptiste  wrote:
>>
>> On Thu, Jul 3, 2014 at 2:24 PM, Thierry FOURNIER 
>> wrote:
>> > On Tue, 1 Jul 2014 23:00:13 +0200
>> > Baptiste  wrote:
>> >
>> >> On Tue, Jul 1, 2014 at 10:54 PM, William Jimenez
>> >>  wrote:
>> >> > Hello
>> >> > I am trying to modify ACLs via the socket interface. When I try to do
>> >> > something like 'get acl', I get an error:
>> >> >
>> >> > Missing ACL identifier and/or key.
>> >> >
>> >> > How do I find the ACL identifier or key for a specific ACL? I see the
>> >> > list
>> >> > of ACLs when i do a 'show acl', but unsure which of these values is
>> >> > the file
>> >> > or key:
>> >> >
>> >> > # id (file) description
>> >> > 0 () acl 'always_true' file '/etc/haproxy/haproxy.cfg' line 19
>> >> > 1 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 20
>> >> > 2 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 21
>> >> > 3 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 22
>> >> >
>> >> > Thanks
>> >>
>> >> Hi William,
>> >>
>> >> In order to be able to update ACL content, they must load their
>> >> content from a file.
>> >> The file name will be considered as a 'reference' you can point to
>> >> when updating content.
>> >> Don't forget to update simultaneously the content from an ACL and from
>> >> the flat file to make HAProxy reload reliable :)
>> >>
>> >> Baptiste
>> >>
>> >
>> > Hi
>> >
>> > You can modify ACL without file. The identifier is the number prefixed
>> > by the char '#', like this:
>> >
>> >add acl #1 127.0.0.1
>> >
>> > get acl is used to debug acl.
>> >
>> > Thierry
>> >
>> >
>>
>> Yes, but the acl number is not reliable, since it can change over time.
>> Furthermore, it's easier to update the content of a flat file than
>> to update ACL values in HAProxy's configuration.
>>
>> Baptiste
>
>
> Here is my config for reference:
>
>> global
>>   daemon
>>   maxconn 4096
>>   chroot /var/lib/haproxy
>>   pidfile /var/run/haproxy.pid
>>   uid 99
>>   gid 99
>>   stats socket /var/lib/haproxy/stats level admin
>> defaults
>>   mode http
>>   timeout connect 5000ms
>>   timeout client 5ms
>>   timeout server 5ms
>> frontend 01-fend-in
>>   bind localhost:80
>>   default_backend 01_bend
>>   acl myacl hdr(Host) -f /root/myacl
>>   #acl redir_true always_false
>>   redirect code 307 location http://example.com if redir_true
>> backend ffd_bend
>>   option httpchk GET /
>>   option http-server-close
>>   server bend013 localhost:8180 check
>>   server bend012 localhost:8180 check
>
>
> Thanks


Hello,

We also need the content of /root/myacl.
Also, your redir_true acl is commented out despite being used, so this
configuration is broken.
Could you provide us with one that you actually tested and that did not
deliver the expected behavior?
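
For reference, here is a minimal sketch of the file-based approach
described above (the file name and the values are only illustrative):

  # in the frontend
  acl blocked_host hdr(Host) -f /etc/haproxy/blocked_hosts.lst

  # at runtime, update the ACL through the stats socket,
  # using the file name as the reference
  echo "add acl /etc/haproxy/blocked_hosts.lst example.com" | socat stdio unix-connect:/var/lib/haproxy/stats
  echo "del acl /etc/haproxy/blocked_hosts.lst example.com" | socat stdio unix-connect:/var/lib/haproxy/stats

Remember to also update the flat file itself so that the change survives a
reload.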

Baptiste



Re: Detecting if the the client connected using SSL

2014-07-08 Thread Baptiste
On Mon, Jul 7, 2014 at 12:17 PM, Dennis Jacobfeuerborn
 wrote:
> On 07.07.2014 08:57, Baptiste wrote:
>> On Mon, Jul 7, 2014 at 3:48 AM, Dennis Jacobfeuerborn
>>  wrote:
>>> Hi,
>>> I'm experimenting with the SSL capabilities of haproxy and I'm wondering
>>> if there is a way to detect if the client connected using SSL?
>>>
>>> The background is that I have two frontends one for SSL and one for
>>> regular http. In the SSL frontend I forward the requests to the http
>>> frontend via send-proxy. This part works well.
>>> The problem I have happens when I want to redirect non-SSL requests to SSL.
>>> The common way seems to be to put this in the http frontend:
>>> redirect scheme https if !{ ssl_fc }
>>>
>>> However since ALL requests arriving there are regular http requests
>>> (either received via port 80 or accept-proxy) this obviously ends in a
>>> redirect loop since ssl_fc only checks if the request received by the
>>> current frontend is a SSL one and not if the original request is.
>>>
>>> What seems to work is this:
>>> redirect scheme https if { dst_port eq 80 }
>>>
>>> This works around the problem but now I have to make sure that the port
>>> I check here matches the port in the bind statement.
>>> A cleaner way would be if I could check if the original request is a SSL
>>> one or not. Is this possible somehow?
>>>
>>> Regards,
>>>   Dennis
>>>
>>
>>
>> Hi Dennis,
>>
>> You should not point your SSL frontend to your clear one.
>> Just use the clear one with a simple redirect rule to SSL one and make
>> the SSL one point to your backend.
>> And you're done.
>
> This makes sense but what I forgot to mention is that I use a
> configuration trick posted here a while ago where I bind SSL frontend to
> several cores to do the SSL offloading and then proxy the requests to
> the http frontend which is bound to a single core to do the
> load-balancing/ha/stats. If I remember correctly then doing the actual
> handling of that stuff on multiple cores is not recommended.
> This is the frontend config I use currently:
>
> listen front-https
> bind-process 2-4
> bind 10.99.0.200:443 ssl crt /etc/pki/tls/certs/testcert.chain.pem
> ciphers
> ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM
> no-sslv3
> reqadd X-Forwarded-Proto:\ https
> server clear abns@ssl-proxy send-proxy
>
> frontend front1
> bind-process 1
> bind 10.99.0.200:80
> bind abns@ssl-proxy accept-proxy
> redirect scheme https if { dst_port eq 80 }
> default_backend back1
>
> Regards.
>   Dennis
>

Hi Dennis,

The answer lies in your frontend definition.
You properly set an X-Forwarded-Proto header.
So to make your redirect decision, just look at it!

Note that I moved your configuration to the new http-request rules.

listen front-https
 [...]
 http-request set-header X-Forwarded-Proto https
 server clear abns@ssl-proxy send-proxy

frontend front1
[...]
http-request redirect scheme https if { req.hdr(X-Forwarded-Proto) https }
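
Note: the condition will need to be negated so that only the plain HTTP
requests (those without the header) get redirected; a sketch:

http-request redirect scheme https if !{ req.hdr(X-Forwarded-Proto) https }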


Baptiste



Re: Stick Table on Websocket - No stats data in table

2014-07-08 Thread Baptiste
On Fri, Jul 4, 2014 at 6:56 AM, Jai Gupta  wrote:
> We are using a stick table with Websocket. Although the haproxy stats page shows
> the correct session rate and current session info, all counters are zero in the
> stick table.
>
> backend websocket
> balance leastconn
> stick-table type string len 12 size 32m expire 7d peers mypeers store
> server_id,conn_cnt,conn_cur,sess_cnt,http_req_cnt,bytes_in_cnt,bytes_out_cnt
> stick on hdr(host)
> default-server on-marked-down shutdown-sessions
> ## websocket protocol validation
> acl hdr_connection_upgrade hdr(Connection) -i upgrade
> acl hdr_upgrade_websocket  hdr(Upgrade)-i websocket
> acl hdr_websocket_key  hdr_cnt(Sec-WebSocket-Key)  eq 1
> acl hdr_websocket_version  hdr_cnt(Sec-WebSocket-Version)  eq 1
> http-request deny if ! hdr_connection_upgrade ! hdr_upgrade_websocket !
> hdr_websocket_key ! hdr_websocket_version
> ## websocket health checking
> option httpchk GET / HTTP/1.1\r\nHost: abc.com\r\nSec-WebSocket-Version:
> 13\r\nSec-WebSocket-Key: haproxytest6Lwghaproxyhh\r\nConnection:
> Upgrade\r\nUpgrade: websocket http-check expect status 101
> ## Servers
> server  one   x.x.x.x:y check
> server  two   x.x.x.x:y check
> ...
> ...
>
> Stick Table
> # table: websocket, type: string, size:33554432, used:2
> 0x1363374: key=159256323654 use=0 exp=604357344 server_id=2 conn_cnt=0
> conn_cur=0 sess_cnt=0 http_req_cnt=0 bytes_in_cnt=0 bytes_out_cnt=0
> 0x137eeb4: key=215334743731 use=0 exp=604523738 server_id=3 conn_cnt=0
> conn_cur=0 sess_cnt=0 http_req_cnt=0 bytes_in_cnt=0 bytes_out_cnt=0
>
> Jai


Hi Jai,

You need to enable tracking through the track-sc directives.
Read the manual; there are some examples about it.
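
A minimal sketch of what this could look like in your backend (untested; it
reuses the hdr(host) key you already stick on):

backend websocket
 stick-table type string len 12 size 32m expire 7d peers mypeers store server_id,conn_cnt,conn_cur,sess_cnt,http_req_cnt,bytes_in_cnt,bytes_out_cnt
 stick on hdr(host)
 # feed the counters for the same key that is used for stickiness
 tcp-request content track-sc0 hdr(host)

Without a track-sc* rule, the entries are created by "stick on" but the
counters are never fed.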

Baptiste



Re: [PATCH] DOC: expand the docs for the provided stats.

2014-07-08 Thread Willy Tarreau
Hi James,

On Mon, Jul 07, 2014 at 09:36:36PM -0400, James Westby wrote:
> This time with improved formatting, and "pxname" fixed.

I just found that the patch did not apply, it was made against an older
version it seems. Anyway, I could reapply your changes by hand. I tried
to be careful, please double-check the attached patch.

That's purely a matter of taste, but I find that the long names "FRONTEND"
etc in brackets are ... long, and not easy to find.

I think we should attempt something like this :

  Letters in brackets indicate what entities the stats apply to :
  - L: Listeners
  - F: Frontends
  - B: Backends
  - S: Servers

0. pxname [FBS] : proxy name
1. svname [FBS] : service name (FRONTEND for frontend, BACKEND for backend,
   any name for server)
2. qcur [BS] : current queued requests. For the backend this reports the
   number queued without a server assigned.

etc...

Or we could always write them in the same order and replace the letter with
a dot, that might make the thing even more readable :

0. pxname [.FBS] : proxy name
1. svname [.FBS] : service name (FRONTEND for frontend, BACKEND for backend,
   any name for server)
2. qcur [..BS] : current queued requests. For the backend this reports the
   number queued without a server assigned.

Having them before the field name would improve alignment but perhaps not
ease of reading, I don't know :

0. [.FBS] pxname : proxy name
1. [.FBS] svname : service name (FRONTEND for frontend, BACKEND for backend,
   any name for server)
2. [..BS] qcur : current queued requests. For the backend this reports the
   number queued without a server assigned.

We could even try to align the colon, but I fear that some field names are quite
large (though this could be a hint for not inventing even longer ones).

So please suggest, check, adapt, etc...

Thanks,
Willy

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1d7bc7a..d2a53e2 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13136,7 +13136,9 @@ Unix socket.
 ---
 
 The statistics may be consulted either from the unix socket or from the HTTP
-page. Both means provide a CSV format whose fields follow. The first line
+page. Both means provide a CSV format whose fields follow. In brackets after
+each field are the types for which the field may take a value. Types not
+in that list will always have a blank value for that field. The first line
 begins with a sharp ('#') and has one word per comma-delimited field which
 represents the title of the column. All other lines starting at the second one
 use a classical CSV format using a comma as the delimiter, and the double quote
@@ -13146,43 +13148,85 @@ text is doubled ('""'), which is the format that most tools recognize. Please
 do not insert any column before these ones in order not to break tools which
 use hard-coded column positions.
 
-  0. pxname: proxy name
-  1. svname: service name (FRONTEND for frontend, BACKEND for backend, any name
-for server)
-  2. qcur: current queued requests
-  3. qmax: max queued requests
-  4. scur: current sessions
-  5. smax: max sessions
-  6. slim: sessions limit
-  7. stot: total sessions
-  8. bin: bytes in
-  9. bout: bytes out
- 10. dreq: denied requests
- 11. dresp: denied responses
- 12. ereq: request errors
- 13. econ: connection errors
- 14. eresp: response errors (among which srv_abrt)
- 15. wretr: retries (warning)
- 16. wredis: redispatches (warning)
+  0. pxname: proxy name [FRONTEND, BACKEND, SERVER]
+  1. svname: service name (FRONTEND for frontend, BACKEND for backend, any
+ name for server) [FRONTEND, BACKEND, SERVER]
+  2. qcur: current queued requests. For the backend this reports the number
+ queued without a server assigned. [BACKEND, SERVER]
+  3. qmax: max value of qcur [BACKEND, SERVER]
+  4. scur: current sessions [FRONTEND, BACKEND, SERVER]
+  5. smax: max sessions [FRONTEND, BACKEND, SERVER]
+  6. slim: configured session limit [FRONTEND, BACKEND, SERVER]
+  7. stot: cumulative number of connections [FRONTEND, BACKEND, SERVER]
+  8. bin: bytes in [FRONTEND, BACKEND, SERVER]
+  9. bout: bytes out [FRONTEND, BACKEND, SERVER]
+ 10. dreq: requests denied because of security concerns.
+ - For tcp this is because of a matched tcp-request content rule.
+ - For http this is because of a matched http-request or tarpit rule.
+ [FRONTEND, BACKEND]
+ 11. dresp: responses denied because of security concerns.
+ - For http this is because of a matched http-request rule, or
+   "option checkcache".
+ [FRONTEND, BACKEND, SERVER]
+ 12. ereq: request errors. Some of the possible causes are:
+ - early termination from the client, before the request has been sent.
+ - read error from the client
+ - client timeout
+ - client closed connection
+ - various bad requests from the client.
+ - request was tarpitted.

Re: Abstract namespace sockets handling

2014-07-08 Thread hodor

On 2014-07-08, 01:24:18, Willy Tarreau wrote:
> > But that would kill the possibility of two processes inside different
> > chroots to communicate efficiently (without some mount --bind tricks).
> > 
> > (I don't have any practical example of such a setup, though :))
> 
> Yes that's the same conclusion I came across this week-end, so I ended up
> thinking that we should have an "internal" scheme in addition to "abns".
> The internal one would generate a random or pseudo-random ID that never
> fails to bind and that never needs to hide during restarts.

Sounds good.

> For now I've handled the pause()/resume() gracefully for abstract sockets,
> they're totally unbound/rebound. The only drawback is that if you're running
> with a socket bound to multiple processes, and a new process wants to replace
> the old one, then fails and signals its failure, in this precise case, only
> one of the initial processes will be able to rebind to the socket. I feel
> like it's a very small issue that only needed to be documented and does not
> even merit a warning in the configuration checker.
> 
> So I've pushed that and backported into 1.5 as well before everyone starts
> to get upset by this nasty behaviour.

Just tried it and seems to work great :). It even recovers the abstract
sockets smoothly if the new instance fails to start.


The problem with plain unix socket being unreachable (unlinked) when the
new instance fails to start is still there, though. This can happen when
we try to bind() on a new tcp port which is occupied by something else
than haproxy.

Perhaps this could be solved by delaying the rename(tempname, path) and
unlink(backname) after all else is done? Something like .bind_finish()
and .bind_rollback() in struct protocol, where .bind_finish() would be
for "all is okay" and .bind_rollback() for "something else failed,
return the socket to the old haproxy instance"? Those functions could be
called after we are reasonably sure nothing else can fail.


Thanks,

-- 
hodor



Re: Abstract namespace sockets handling

2014-07-08 Thread Willy Tarreau
Hi,

On Tue, Jul 08, 2014 at 02:57:53PM +0200, hodor wrote:
> > So I've pushed that and backported into 1.5 as well before everyone starts
> > to get upset by this nasty behaviour.
> 
> Just tried it and seems to work great :). It even recovers the abstract
> sockets smoothly if the new instance fails to start.

Yes that was the purpose :-)

> The problem with plain unix socket being unreachable (unlinked) when the
> new instance fails to start is still there, though. This can happen when
> we try to bind() on a new tcp port which is occupied by something else
> than haproxy.

Not exactly, I know what's happening, you have a frontend which had both
a unix socket and an abstract socket. When resuming, the abstract socket
failed and the proxy was marked in error so polling was not re-enabled on
its listeners. I still have to see how far we can go to change that. It's
very tricky as we don't want to leave a process in a bad state which will
never stop for example. Initially when the soft restart was implemented,
we were not supposed to have multiple processes listening :-)

The pause/resume operations for unix sockets are different than those
for other protocols because a file system access is needed, so they're
performed by the new process.

> Perhaps this could be solved by delaying the rename(tempname, path) and
> unlink(backname) after all else is done? Something like .bind_finish()
> and .bind_rollback() in struct protocol, where .bind_finish() would be
> for "all is okay" and .bind_rollback() for "something else failed,
> return the socket to the old haproxy instance"? Those functions could be
> called after we are reasonably sure nothing else can fail.

All that is properly done. Check your config to ensure you're not in
the case above, or alternatively, comment out the "fail = 1" statement
at line 841 in proxy.c and you will see this annoying behaviour go
away by itself.

Willy




[PATCH] DOC: expand the docs for the provided stats.

2014-07-08 Thread James Westby
Indicate for each statistic which types may have a value for
that statistic.

Explain some of the provided statistics a little more deeply.
---
 doc/configuration.txt | 163 +++---
 1 file changed, 100 insertions(+), 63 deletions(-)

Hi,

I like the short names after the field names, so I've implemented
that. I also rebased on top of master (turns out the github mirror was
outdated), and included the listener information.

Also, I'm excited for the point when I'll be able to use the new *time
stats, they will be very nice to have.

Thanks,

James

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1d7bc7a..e07dea9 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13146,44 +13146,76 @@ text is doubled ('""'), which is the format that most tools recognize. Please
 do not insert any column before these ones in order not to break tools which
 use hard-coded column positions.
 
-  0. pxname: proxy name
-  1. svname: service name (FRONTEND for frontend, BACKEND for backend, any name
-for server)
-  2. qcur: current queued requests
-  3. qmax: max queued requests
-  4. scur: current sessions
-  5. smax: max sessions
-  6. slim: sessions limit
-  7. stot: total sessions
-  8. bin: bytes in
-  9. bout: bytes out
- 10. dreq: denied requests
- 11. dresp: denied responses
- 12. ereq: request errors
- 13. econ: connection errors
- 14. eresp: response errors (among which srv_abrt)
- 15. wretr: retries (warning)
- 16. wredis: redispatches (warning)
- 17. status: status (UP/DOWN/NOLB/MAINT/MAINT(via)...)
- 18. weight: server weight (server), total weight (backend)
- 19. act: server is active (server), number of active servers (backend)
- 20. bck: server is backup (server), number of backup servers (backend)
- 21. chkfail: number of failed checks
- 22. chkdown: number of UP->DOWN transitions
- 23. lastchg: last status change (in seconds)
- 24. downtime: total downtime (in seconds)
- 25. qlimit: queue limit
- 26. pid: process id (0 for first instance, 1 for second, ...)
- 27. iid: unique proxy id
- 28. sid: service id (unique inside a proxy)
- 29. throttle: warm up status
- 30. lbtot: total number of times a server was selected
- 31. tracked: id of proxy/server if tracking is enabled
- 32. type (0=frontend, 1=backend, 2=server, 3=socket)
- 33. rate: number of sessions per second over last elapsed second
- 34. rate_lim: limit on new sessions per second
- 35. rate_max: max number of new sessions per second
- 36. check_status: status of last health check, one of:
+In brackets after each field name are the types which may have a value for
+that field. The types are L (Listeners), F (Frontends), B (Backends), and
+S (Servers).
+
+  0. pxname [LFBS]: proxy name
+  1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend,
+ any name for server/listener)
+  2. qcur [..BS]: current queued requests. For the backend this reports the
+ number queued without a server assigned.
+  3. qmax [..BS]: max value of qcur
+  4. scur [LFBS]: current sessions
+  5. smax [LFBS]: max sessions
+  6. slim [LFBS]: configured session limit
+  7. stot [LFBS]: cumulative number of connections
+  8. bin [LFBS]: bytes in
+  9. bout [LFBS]: bytes out
+ 10. dreq [LFB.]: requests denied because of security concerns.
+ - For tcp this is because of a matched tcp-request content rule.
+ - For http this is because of a matched http-request or tarpit rule.
+ 11. dresp [LFBS]: responses denied because of security concerns.
+ - For http this is because of a matched http-request rule, or
+   "option checkcache".
+ 12. ereq [LF..]: request errors. Some of the possible causes are:
+ - early termination from the client, before the request has been sent.
+ - read error from the client
+ - client timeout
+ - client closed connection
+ - various bad requests from the client.
+ - request was tarpitted.
+ 13. econ [..BS]: number of requests that encountered an error trying to
+ connect to a backend server. The backend stat is the sum of the stat
+ for all servers of that backend, plus any connection errors not
+ associated with a particular server (such as the backend having no
+ active servers).
+ 14. eresp [..BS]: response errors. srv_abrt will be counted here also.
+ Some other errors are:
+ - write error on the client socket (won't be counted for the server stat)
+ - failure applying filters to the response.
+ 15. wretr [..BS]: number of times a connection to a server was retried.
+ 16. wredis [..BS]: number of times a request was redispatched to another
+ server. The server value counts the number of times that server was
+ switched away from.
+ 17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)...)
+ 18. weight [..BS]: server weight (server), total weight (backend)
+ 19. act [..BS]: server is active (server), number of active servers (backend)
+ 20. bck [..BS]: server is backup (server), number of backup servers (backend)

LB EDIFACT files

2014-07-08 Thread Andrey Zakabluk
Hi! I use haproxy for balancing http requests - it is the best LB! And now I am
trying to balance EDIFACT files. As a first step I am trying to simply forward
requests between server GMD and server COMARCH via Haproxy.
--Plan (how should this work?)

server GMD sends an EDIFACT file to --> haproxy:7446, haproxy sends it --> to server
COMARCH:7446 (which processes the EDIFACT file) and sends the response to HAPROXY -->
haproxy:3038, haproxy sends it to --> GMD:3038
-
--version Haproxy
HA-Proxy version 1.5.1 2014/06/24
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = 

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built without OpenSSL support (USE_OPENSSL not set)
Built without PCRE support (using libc's regex instead)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
-- Config haproxy

global
#daemon
maxconn 10240
debug
log 127.0.0.1 local1 debug
stats socket /opt/haproxy/gmd_bscs_prod/socket_s1cm1/haproxy.sock mode 0600 level admin
log-tag haproxy
#nbproc 4
defaults
mode tcp
log global
option tcplog
timeout connect 50s
timeout client 30s
timeout server 30s
  
frontend gmdcom1
bind 10.254.13.100:7446
default_backend comarch
mode tcp
log global
option tcplog

#

frontend gmdrrs1
bind 10.254.13.100:3038
default_backend gmdrrs
mode tcp
log global
option tcplog


backend comarch
balance roundrobin
mode tcp
server mdsp_7446 10.254.13.40:7446 check port 7446

backend gmdrrs
balance roundrobin
mode tcp
server rrs_3038 s1cm1t:3038 check port 3038



-- Debug log Haproxy 
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.
.
 0 sessions requeued, 0 total in queue.
:gmdcom1.accept(0005)=0008 from [10.254.62.75:60287]
:comarch.srvcls[0008:0009]
:comarch.clicls[0008:0009]
:comarch.closed[0008:0009]
0001:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21481]
0001:gmdrrs.srvcls[0008:0009]
0001:gmdrrs.clicls[0008:0009]
0001:gmdrrs.closed[0008:0009]
0002:gmdcom1.accept(0005)=0008 from [10.254.62.75:60292]
0002:comarch.srvcls[0008:0009]
0002:comarch.clicls[0008:0009]
0002:comarch.closed[0008:0009]
0003:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21482]
0003:gmdrrs.srvcls[0008:0009]
0003:gmdrrs.clicls[0008:0009]
0003:gmdrrs.closed[0008:0009]
0004:gmdcom1.accept(0005)=0008 from [10.254.62.75:60340]
0004:comarch.srvcls[0008:0009]
0004:comarch.clicls[0008:0009]
0004:comarch.closed[0008:0009]
0005:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21484]
0005:gmdrrs.srvcls[0008:0009]
0005:gmdrrs.clicls[0008:0009]
0005:gmdrrs.closed[0008:0009]
0006:gmdcom1.accept(0005)=0008 from [10.254.62.75:60467]
0006:comarch.srvcls[0008:0009]
0006:comarch.clicls[0008:0009]
0006:comarch.closed[0008:0009]
0007:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21503]
0007:gmdrrs.srvcls[0008:0009]
0007:gmdrrs.clicls[0008:0009]
0007:gmdrrs.closed[0008:0009]
0008:gmdcom1.accept(0005)=0008 from [10.254.62.75:60700]
0008:comarch.srvcls[0008:0009]
0008:comarch.clicls[0008:0009]
0008:comarch.closed[0008:0009]
0009:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21521]
0009:gmdrrs.srvcls[0008:0009]
0009:gmdrrs.clicls[0008:0009]
0009:gmdrrs.closed[0008:0009]
000a:gmdcom1.accept(0005)=0008 from [10.254.62.75:60729]
000a:comarch.srvcls[0008:0009]
000a:comarch.clicls[0008:0009]
000a:comarch.closed[0008:0009]
000b:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21522]
000b:gmdrrs.srvcls[0008:0009]
000b:gmdrrs.clicls[0008:0009]
000b:gmdrrs.closed[0008:0009]
000c:gmdcom1.accept(0005)=0008 from [10.254.62.75:63061]
000c:comarch.srvcls[0008:0009]
000c:comarch.clicls[0008:0009]
000c:comarch.closed[0008:0009]
000d:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21604]
000d:gmdrrs.srvcls[0008:0009]
000d:gmdrrs.clicls[0008:0009]
000d:gmdrrs.closed[0008:0009]
000e:gmdcom1.accept(0005)=0008 from [10.254.62.75:64621]
000e:comarch.srvcls[0008:0009]
000e:comarch.clicls[0008:0009]
000e:comarch.closed[0008:0009]
000f:gmdrrs1.accept(0007)=0008 from [10.254.13.40:21758]
000f:gmdrrs.srvcls[0008:0009]
000

Re: Stick Table on Websocket - No stats data in table

2014-07-08 Thread Jai Gupta
On Tue, Jul 8, 2014 at 1:47 PM, Baptiste  wrote:

> On Fri, Jul 4, 2014 at 6:56 AM, Jai Gupta  wrote:
> > We are using stick table with Websocket. Although haproxy stats page
> shows
> > correct session rate, current session info but all counters are zero in
> > stick table.
> >
> > backend websocket
> > balance leastconn
> > stick-table type string len 12 size 32m expire 7d peers mypeers store
> >
> server_id,conn_cnt,conn_cur,sess_cnt,http_req_cnt,bytes_in_cnt,bytes_out_cnt
> > stick on hdr(host)
> > default-server on-marked-down shutdown-sessions
> > ## websocket protocol validation
> > acl hdr_connection_upgrade hdr(Connection) -i upgrade
> > acl hdr_upgrade_websocket  hdr(Upgrade)-i
> websocket
> > acl hdr_websocket_key  hdr_cnt(Sec-WebSocket-Key)  eq 1
> > acl hdr_websocket_version  hdr_cnt(Sec-WebSocket-Version)  eq 1
> > http-request deny if ! hdr_connection_upgrade !
> hdr_upgrade_websocket !
> > hdr_websocket_key ! hdr_websocket_version
> > ## websocket health checking
> > option httpchk GET / HTTP/1.1\r\nHost: abc.com
> \r\nSec-WebSocket-Version:
> > 13\r\nSec-WebSocket-Key: haproxytest6Lwghaproxyhh\r\nConnection:
> > Upgrade\r\nUpgrade: websocket http-check expect status 101
> > ## Servers
> > server  one   x.x.x.x:y check
> > server  two   x.x.x.x:y check
> > ...
> > ...
> >
> > Stick Table
> > # table: websocket, type: string, size:33554432, used:2
> > 0x1363374: key=159256323654 use=0 exp=604357344 server_id=2 conn_cnt=0
> > conn_cur=0 sess_cnt=0 http_req_cnt=0 bytes_in_cnt=0 bytes_out_cnt=0
> > 0x137eeb4: key=215334743731 use=0 exp=604523738 server_id=3 conn_cnt=0
> > conn_cur=0 sess_cnt=0 http_req_cnt=0 bytes_in_cnt=0 bytes_out_cnt=0
> >
> > Jai
>
>
> Hi Jai,
>
> You need to enabled track through the track-sc directives.
> Read the man, there are some examples about it.
>
> Baptiste
>

Thank you Baptiste. It worked. I wish there was a reference to this in the
stick-table section, under the data types or where the stick-table example is
shown - or maybe I missed it while reading the stick-table related documentation.

Jai


Re: Detecting if the the client connected using SSL

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 10:14, Baptiste wrote:
> On Mon, Jul 7, 2014 at 12:17 PM, Dennis Jacobfeuerborn
>  wrote:
>> On 07.07.2014 08:57, Baptiste wrote:
>>> On Mon, Jul 7, 2014 at 3:48 AM, Dennis Jacobfeuerborn
>>>  wrote:
 Hi,
 I'm experimenting with the SSL capabilities of haproxy and I'm wondering
 if there is a way to detect if the client connected using SSL?

 The background is that I have two frontends one for SSL and one for
 regular http. In the SSL frontend I forward the requests to the http
 frontend via send-proxy. This part works well.
 The problem I have happens when I want to redirect non-SSL requests to SSL.
 The common way seems to be to put this in the http frontend:
 redirect scheme https if !{ ssl_fc }

 However since ALL requests arriving there are regular http requests
 (either received via port 80 or accept-proxy) this obviously ends in a
 redirect loop since ssl_fc only checks if the request received by the
 current frontend is a SSL one and not if the original request is.

 What seems to work is this:
 redirect scheme https if { dst_port eq 80 }

 This works around the problem but now I have to make sure that the port
 I check here matches the port in the bind statement.
 A cleaner way would be if I could check if the original request is a SSL
 one or not. Is this possible somehow?

 Regards,
   Dennis

>>>
>>>
>>> Hi Dennis,
>>>
>>> You should not point your SSL frontend to your clear one.
>>> Just use the clear one with a simple redirect rule to SSL one and make
>>> the SSL one point to your backend.
>>> And you're done.
>>
>> This makes sense but what I forgot to mention is that I use a
>> configuration trick posted here a while ago where I bind SSL frontend to
>> several cores to do the SSL offloading and then proxy the requests to
>> the http frontend which is bound to a single core to do the
>> load-balancing/ha/stats. If I remember correctly then doing the actual
>> handling of that stuff on multiple cores is not recommended.
>> This is the frontend config I use currently:
>>
>> listen front-https
>> bind-process 2-4
>> bind 10.99.0.200:443 ssl crt /etc/pki/tls/certs/testcert.chain.pem
>> ciphers
>> ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM
>> no-sslv3
>> reqadd X-Forwarded-Proto:\ https
>> server clear abns@ssl-proxy send-proxy
>>
>> frontend front1
>> bind-process 1
>> bind 10.99.0.200:80
>> bind abns@ssl-proxy accept-proxy
>> redirect scheme https if { dst_port eq 80 }
>> default_backend back1
>>
>> Regards.
>>   Dennis
>>
> 
> Hi Dennis,
> 
> The answer stands in your frontend definition.
> You properly set a X-Forwarded-Proto headers.
> So to take your redirect decision, just look for it!
> 
> Note that I moved your configuration to the new http-request rules.
> 
> listen front-https
>  [...]
>  http-request set-header X-Forwarded-Proto https
>  server clear abns@ssl-proxy send-proxy
> 
> frontend front1
> [...]
> http-request redirect scheme https if { req.hdr(X-Forwarded-Proto) https }
> 

Well duh, I should have thought of that. Thanks!
With this everything works as intended (just had to negate the condition
for the redirect).

Regards,
  Dennis



Re: [PATCH] DOC: expand the docs for the provided stats.

2014-07-08 Thread Willy Tarreau
Hi James,

On Tue, Jul 08, 2014 at 10:14:57AM -0400, James Westby wrote:
> I like the short names after the field names, so I've implemented
> that. I also rebased on top of master (turns out the github mirror was
> outdated), and included the listener information.

Great they applied fine this time and I also prefer the way they look
now.

> Also, I'm excited for the point when I'll be able to use the new *time
> stats, they will be very nice to have.

Just keep in mind that this is an infinite sliding window and that you
will get within 95% accuracy for the previous 1024 samples. Just do a
"git log -2 4bfc580dd" to get all the information about the algorithm
involved if you want to understand better. I tried this algorithm for
the first time something like 12 years ago (for "inject") but failed
at it and always postponed investigations to fix it. Then I wanted to
try it again for the stats page but remembered it required to be fixed
and would never assign the time to work on it again. And finally I had
some fun with it during the week-end just before the 1.5 release and
that allowed me to remove an item from my todo list :-)

So please do not hesitate to report your observations. Just keep in
mind that the average output value reflects the latest measures much
more than the oldest ones. Once you have that in mind, you know what
you observe.

Thanks,
Willy




Re: Abstract namespace sockets handling

2014-07-08 Thread hodor
Hello,

On 2014-07-08, 15:26:32, Willy Tarreau wrote:
> Not exactly, I know what's happening, you have a frontend which had both
> a unix socket and an abstract socket. When resuming, the abstract socket
> failed and the proxy was marked in error so polling was not re-enabled on
> its listeners. I still have to see how far we can go to change that. It's
> very tricky as we don't want to leave a process in a bad state which will
> never stop for example. Initially when the soft restart was implemented,
> we were not supposed to have multiple processes listening :-)
> 
> The pause/resume operations for unix sockets are different than those
> for other protocols because a file system access is needed, so they're
> performed by the new process.
> 
> > Perhaps this could be solved by delaying the rename(tempname, path) and
> > unlink(backname) after all else is done? Something like .bind_finish()
> > and .bind_rollback() in struct protocol, where .bind_finish() would be
> > for "all is okay" and .bind_rollback() for "something else failed,
> > return the socket to the old haproxy instance"? Those functions could be
> > called after we are reasonably sure nothing else can fail.
> 
> All that is properly done. Check your config to ensure you're not in
> > the case above, or alternatively, comment out the "fail = 1" statement
> > at line 841 in proxy.c and you will see this annoying behaviour go
> > away by itself.

I think we are talking about different problems. The one I mentioned
doesn't even need abstract sockets at all. It just needs some other
thing to fail after we have already made the link(), bind(), rename(),
unlink() stuff.

Let's have these two config files:

conf1:

---
global
  pidfile /tmp/proxy/pid

defaults
  mode tcp

listen test1
  bind unix@/tmp/test1.sock
  server test1 127.0.0.1:22
---


conf2:

---
global
  pidfile /tmp/proxy/pid

defaults
  mode tcp

listen test1
  bind unix@/tmp/test1.sock
  server test1 127.0.0.1:22

listen test2
  bind ipv4@127.0.0.1:22
  server test2 127.0.0.1:23
---

First start the first one (daemon mode is necessary for the pid file):

./haproxy -f conf1 -D

"socat stdio unix-connect:/tmp/test1.sock" now works and connects to the
local SSH.

Now we try to reload haproxy with the second config:

./haproxy -f conf2 -p pid -D -sf `cat pid`

The whole new haproxy instance will fail as port 22 is occupied by SSH
and cannot be bound. The new instance unlink()ed the original
/tmp/test1.sock, so the old instance, although running, is now
effectively useless.

I tried with and without the "fail = 1" statement present. Did I miss
something?

I realize it is not entirely fair to change the config this way :). It
is not a problem for me. I just wanted to point out this can happen.


Thanks,

-- 
hodor



Re: Abstract namespace sockets handling

2014-07-08 Thread Willy Tarreau
Hi,

On Tue, Jul 08, 2014 at 10:36:10PM +0200, hodor wrote:
> Hello,
> 
> On 2014-07-08, 15:26:32, Willy Tarreau wrote:
> > Not exactly, I know what's happening, you have a frontend which had both
> > a unix socket and an abstract socket. When resuming, the abstract socket
> > failed and the proxy was marked in error so polling was not re-enabled on
> > its listeners. I still have to see how far we can go to change that. It's
> > very tricky as we don't want to leave a process in a bad state which will
> > never stop for example. Initially when the soft restart was implemented,
> > we were not supposed to have multiple processes listening :-)
> > 
> > The pause/resume operations for unix sockets are different than those
> > for other protocols because a file system access is needed, so they're
> > performed by the new process.
> > 
> > > Perhaps this could be solved by delaying the rename(tempname, path) and
> > > unlink(backname) after all else is done? Something like .bind_finish()
> > > and .bind_rollback() in struct protocol, where .bind_finish() would be
> > > for "all is okay" and .bind_rollback() for "something else failed,
> > > return the socket to the old haproxy instance"? Those functions could be
> > > called after we are reasonably sure nothing else can fail.
> > 
> > All that is properly done. Check your config to ensure you're not in
> > the case above, or alternatively, comment out the "fail = 1" statement
> > at line 841 in proxy.c and you will see this annoying behaviour go
> > away by itself.
> 
> I think we are talking about different problems. The one I mentioned
> doesn't even need abstract sockets at all. It just needs some other
> thing to fail after we have already made the link(), bind(), rename(),
> unlink() stuff.
> 
> Let's have these two config files:
> 
> conf1:
> 
> ---
> global
>   pidfile /tmp/proxy/pid
> 
> defaults
>   mode tcp
> 
> listen test1
>   bind unix@/tmp/test1.sock
>   server test1 127.0.0.1:22
> ---
> 
> 
> conf2:
> 
> ---
> global
>   pidfile /tmp/proxy/pid
> 
> defaults
>   mode tcp
> 
> listen test1
>   bind unix@/tmp/test1.sock
>   server test1 127.0.0.1:22
> 
> listen test2
>   bind ipv4@127.0.0.1:22
>   server test2 127.0.0.1:23
> ---
> 
> First start the first one (daemon mode is necessary for the pid file):
> 
> ./haproxy -f conf1 -D
> 
> "socat stdio unix-connect:/tmp/test1.sock" now works and connects to the
> local SSH.
> 
> Now we try to reload haproxy with the second config:
> 
> ./haproxy -f conf2 -p pid -D -sf `cat pid`
> 
> The whole new haproxy instance will fail as port 22 is occupied by SSH
> and cannot be bound. The new instance unlink()ed the original
> /tmp/test1.sock, so the old instance, although running, is now
> effectively useless.
> 
> I tried with and without the "fail = 1" statement present. Did I miss
> something?
> 
> I realize it is not entirely fair to change the config this way :). It
> is not a problem for me. I just wanted to point out this can happen.

We were clearly talking about the same thing, but I don't experience
this with my configs, it works perfectly and correctly restores the
old socket when leaving. I'll have to retry with your config; I would
not be surprised to meet a corner case.

Willy




Re: [PATCH] DOC: expand the docs for the provided stats.

2014-07-08 Thread James Westby
Willy Tarreau  writes:
> Great they applied fine this time and I also prefer the way they look
> now.

Thanks, glad to see this merged.

Plus I have to compliment you on your response to a first time
contributor.

Thanks,

James



Re: Using the socket interface to access ACLs

2014-07-08 Thread Thierry FOURNIER
On Thu, 3 Jul 2014 14:59:46 +0200
Baptiste  wrote:

> On Thu, Jul 3, 2014 at 2:24 PM, Thierry FOURNIER  
> wrote:
> > On Tue, 1 Jul 2014 23:00:13 +0200
> > Baptiste  wrote:
> >
> >> On Tue, Jul 1, 2014 at 10:54 PM, William Jimenez
> >>  wrote:
> >> > Hello
> >> > I am trying to modify ACLs via the socket interface. When I try to do
> >> > something like 'get acl', I get an error:
> >> >
> >> > Missing ACL identifier and/or key.
> >> >
> >> > How do I find the ACL identifier or key for a specific ACL? I see the 
> >> > list
> >> > of ACLs when i do a 'show acl', but unsure which of these values is the 
> >> > file
> >> > or key:
> >> >
> >> > # id (file) description
> >> > 0 () acl 'always_true' file '/etc/haproxy/haproxy.cfg' line 19
> >> > 1 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 20
> >> > 2 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 21
> >> > 3 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 22
> >> >
> >> > Thanks
> >>
> >> Hi William,
> >>
> >> In order to be able to update ACL content, they must load their
> >> content from a file.
> >> The file name will be considered as a 'reference' you can point to
> >> when updating content.
> >> Don't forget to update simultaneously the content from an ACL and from
> >> the flat file to make HAProxy reload reliable :)
> >>
> >> Baptiste
> >>
> >
> > Hi
> >
> > You can modify ACL without file. The identifier is the number prefixed
> > by the char '#', like this:
> >
> >add acl #1 127.0.0.1
> >
> > get acl is used to debug acl.
> >
> > Thierry
> >
> >
> 
> Yes, but the acl number is not reliable, since it can change over time.
> Furthermore, it's easier to update the content of a flat file than
> to update ACL values in HAProxy's configuration.

Absolutely not: you can fix the id of the acl with the "-u" flag. With
this, the acl has a fixed numeric identifier. This method permits having
reliable ACL patterns without a fake file. Example:

   acl myacl hdr(Host) -u 10 www.foo.com www.bar.com

In this example, the value "10" is the unique identifier of the acl.
You can find the documentation about ACLs in the famous Cyril haproxy web
documentation:

   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.1
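
With such an identifier, the runtime updates on the stats socket look like
this (a sketch; the socket path and host value are only illustrative):

   echo "add acl #10 www.new-host.com" | socat stdio unix-connect:/var/run/haproxy.stat
   echo "show acl #10" | socat stdio unix-connect:/var/run/haproxy.stat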

Thierry




Re: [PATCH] DOC: expand the docs for the provided stats.

2014-07-08 Thread Willy Tarreau
On Tue, Jul 08, 2014 at 05:15:09PM -0400, James Westby wrote:
> Willy Tarreau  writes:
> > Great they applied fine this time and I also prefer the way they look
> > now.
> 
> Thanks, glad to see this merged.
> 
> Plus I have to compliment you on your response to a first time
> contributor.

Hehe, next time you won't be a first-time contributor anymore; be
careful, I make efforts only once :-)

cheers,
Willy