RE: Track headers with tcp-request in listen only work with if HTTP

2013-09-02 Thread Ricardo F
Hello Willy,


Now it makes sense. This is a very clever use of a condition!


Thanks for your time.

Ricardo F.





 Date: Sat, 31 Aug 2013 08:54:49 +0200
 From: w...@1wt.eu
 To: ri...@hotmail.com
 CC: haproxy@formilux.org
 Subject: Re: Track headers with tcp-request in listen only work with if HTTP

 Hello Ricardo,

 On Thu, Aug 22, 2013 at 01:03:32PM +0200, Ricardo F wrote:
 Hello,

 I have been testing the connection tracking in the frontend based on headers,
 but it only works if the 'if HTTP' condition is set:

 tcp-request inspect-delay 10s
 tcp-request content track-sc0 hdr(x-forwarded-for,-1) if HTTP

 Without this condition, the table doesn't fill and the connections aren't tracked.

 This is normal, and due to the way the rules are evaluated.

 The tcp-request rules are a list of actions each having an optional condition.
 The principle is to run over the rules and :
 - if the condition is false, skip the rule
 - if the condition is true, apply the rule
 - if the condition is unknown, wait for more data

 Then when the rule is applied, it is performed whatever the type of rule
 (track, reject, etc.).

 As you can see, when doing if HTTP, we abuse the condition mechanism to
 ensure that the request buffer contains a complete HTTP request. It first
 waits for data because until there are enough data in the buffer, we can't
 tell whether it's HTTP or not, and when we can tell, the data you want to
 track are available.

 If you try to track missing data, the track action is ignored (which allows
 you to have multiple track actions on different data and track on the first
 one which matches).
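
 (For illustration, a minimal sketch of that pattern; the fetches are only examples
 taken from the configuration above. If the first fetch returns nothing, its track
 action is simply ignored and the next rule gets a chance to track:

  tcp-request inspect-delay 10s
  tcp-request content track-sc0 hdr(x-forwarded-for,-1) if HTTP
  tcp-request content track-sc0 src if HTTP
 )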

 I think it will be possible in the future to have the actions automatically
 wait for the data by themselves since now (recently) we know if we're missing
 something or not when doing the track action. But before doing so, I'd like
 to ensure that we don't break some setups by doing so, typically the ones
 that rely on the behaviour described above. If all fetch functions correctly
 return 'not found', that should be OK. I just want to be sure that none will
 accidentally return 'not found' YET in some corner cases that could be found
 in valid setups.

 Best regards,
 Willy

 


RE: Issue with tcp-request content and keep alive

2013-09-02 Thread Ricardo F
Hello Willy,


Thanks for your reply and the suggested solution; I just tried it and it works as
expected. But in my particular case it's a bit messy to put this configuration in
place, duplicating several lines across a dozen backends.

For the moment I will continue with keep-alive disabled, and when the option is
available I will be one of the first to try it.

I understand the difficulty of fully implementing http-request track-sc; it really
is very tricky.


Thanks for your time.


Ricardo F.




 Date: Sat, 31 Aug 2013 09:20:29 +0200
 From: w...@1wt.eu
 To: ri...@hotmail.com
 CC: haproxy@formilux.org
 Subject: Re: Issue with tcp-request content and keep alive

 Hello Ricardo,

 On Fri, Aug 30, 2013 at 11:27:45AM +0200, Ricardo F wrote:
 Hello,

 I have an issue when trying to track a connection based on a header with
 tcp-request, with keep-alive enabled in a listen section.
 In front of HAProxy I have a CDN, which puts the IP of the client at the
 beginning of the X-Forwarded-For header. All the requests pass through
 this CDN.

 This is the configuration:

 global
 maxconn 1000
 log 127.0.0.1 local5 info err
 stats socket /var/run/haproxy.sock mode 0600 level admin
 pidfile /var/run/haproxy.pid

 defaults
 mode http
 log global
 retries 3
 option redispatch
 timeout connect 5s
 timeout client 10s
 timeout server 10s
 timeout http-keep-alive 60s
 timeout http-request 5s

 listen proxy-http 192.168.1.100:80
 mode http
 maxconn 1000
 balance roundrobin
 stats enable
 option httplog
 option http-server-close
 #option httpclose
 option forwardfor

 stick-table type ip size 128m expire 30m store gpc0
 tcp-request inspect-delay 5s
 tcp-request content track-sc0 req.hdr_ip(X-Forwarded-For,1) if HTTP

 acl rule_marked_deny sc0_get_gpc0 gt 0

 use_backend back-deny if rule_marked_deny

 default_backend back-http

 backend back-deny
 server web-deny 192.168.1.133:80

 backend back-http
 server web-http 192.168.1.101:80


 With this configuration, all the requests with the X-Forwarded-For header are
 tracked in the sc0 counter, keyed on the IP it contains.

 If the counter of one IP is updated to one, the request will be sent to
 back-deny; this is done by writing directly to the UNIX socket from other
 software, as in this example:

 # echo set table proxy-http key 88.64.32.11 data.gpc0 1 | socat stdio /var/run/haproxy.sock

 From the moment this is done (with keep-alive enabled), I see the following in
 the log of the web-deny backend server (the log format is modified to record the
 X-Forwarded-For IP instead of the real TCP connection):

 88.64.32.11 - - [30/Aug/2013:09:08:22 +0200] www.server.com GET /some/url HTTP/1.1 301 208
 157.55.32.236 - - [30/Aug/2013:09:08:27 +0200] www.server.com GET /some/url HTTP/1.1 301 208
 88.64.32.11 - - [30/Aug/2013:09:08:27 +0200] www.server.com GET /some/url HTTP/1.1 301 208
 157.55.32.236 - - [30/Aug/2013:09:08:28 +0200] www.server.com GET /some/url HTTP/1.1 301 208
 88.64.32.11 - - [30/Aug/2013:09:08:29 +0200] www.server.com GET /some/url HTTP/1.1 301 208
 157.56.93.186 - - [30/Aug/2013:09:08:31 +0200] www.server.com GET /some/url HTTP/1.1 301 208
 157.56.93.186 - - [30/Aug/2013:09:08:31 +0200] www.server.com GET /some/url HTTP/1.1 301 208

 As you can see, there are other IPs there, and only one of them has the 1 in
 the HAProxy table. This is a small piece of the log, but when I try that on a
 server with more traffic the problem is worse: more IPs are redirected to
 this backend without being marked for it.

 But if I change the listen section to 'option httpclose', everything works well:
 only the marked IPs are redirected. Problem solved, but why?

 Does the tcp-request inspection have problems tracking the requests when they are
 passed through the CDN, which routes more than one request from various clients
 over the same TCP connection?

 I like your detailed analysis, you almost found the reason. This is because
 tcp-request inspects traffic at the beginning of a *session*, not for each
 request. BUT! there is a trick to help you do what you need.

 A tcp-request rule put in a backend will be evaluated each time a session
 is transferred to a backend. Since the keep-alive with the client is handled
 in the frontend, each new request will cause the session to be connected to
 the backend, and the tcp-request rules in the backend will see all requests
 (which is another reason why server-side keep-alive is a nightmare to
 implement).

 So I suggest that you split your listen into frontend + backend and
 move the tcp-request rule in the backend.

 I know, you'll tell me "but I can't put a use_backend rule in a backend".
 Then simply use use-server with a server weight of zero, which will never
 be used by regular traffic.
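
 A rough sketch of that split, reusing names and addresses from the configuration
 quoted above (untested, only to illustrate where the rules would move):

  frontend proxy-http
      bind 192.168.1.100:80
      mode http
      option http-server-close
      option forwardfor
      default_backend back-http

  backend back-http
      stick-table type ip size 128m expire 30m store gpc0
      tcp-request inspect-delay 5s
      tcp-request content track-sc0 req.hdr_ip(X-Forwarded-For,1) if HTTP
      acl rule_marked_deny sc0_get_gpc0 gt 0
      use-server web-deny if rule_marked_deny
      server web-http 192.168.1.101:80
      server web-deny 192.168.1.133:80 weight 0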

 Probably the next feature in the roadmap, http-request track-sc, will solve
 this?

 Yes definitely. However for having looked at how to implement it, the
 remaining hair on my head stood up straight and remained like this for
 2 days :-)

 The real 

Re: Limits for physical server

2013-09-02 Thread Baptiste
Hi,

This is not easily doable out of the box, but some workarounds may be possible.
Please let me know the following information:
- Do you need persistence?
- How many servers?
- How many backends?
- How do you make the routing decision between backends?

Baptiste


On Mon, Sep 2, 2013 at 11:15 AM, Andreas Mock andreas.m...@drumedar.de wrote:
 Hi all,

 I'm not sure if the following is doable:

 I have several servers (processes providing services) on
 one physical server. Is there a way to limit the number
 of connections for the physical server?

 backend num1
     server server1 IP:Port1
     server server2 IP:Port1
 backend num2
     server server1 IP:Port2
     server server2 IP:Port2

 And I want to limit resources based on
 the entities server1, server2 while sharing
 their resources among the backends.

 Hint appreciated.

 Best regards
 Andreas Mock





AW: Limits for physical server

2013-09-02 Thread Andreas Mock
Hi Baptiste,

the answers to your questions:

1) No persistence needed. HTTP(S) proxy (1.5.x).
2) 6 + x physical servers, 97 frontend services (IP/port combinations),
and almost any frontend service can be served by a service on the physical
server.
3) Currently round robin. Open to other advice.

Best regards
Andreas Mock

P.S.: Would a logical grouping of servers (in terms of HA)
into server groups, with the ability to have config variables
for server groups, be a meaningful feature request?


-----Original Message-----
From: Baptiste [mailto:bed...@gmail.com]
Sent: Monday, 2 September 2013 11:50
To: Andreas Mock
Cc: haproxy@formilux.org
Subject: Re: Limits for physical server

Hi,

This is not easily doable out of the box, but some workarounds may be doable.
Please let me know the few information below:
- Do you need persistence?
- how many servers?
- how many backends?
- how do you take routing decision between backends

Baptiste


On Mon, Sep 2, 2013 at 11:15 AM, Andreas Mock andreas.m...@drumedar.de wrote:
 Hi all,

 I'm not sure if the following is doable:

 I have several servers (processes providing services) on
 one physical server. Is there a way to limit the count
 of connections for the physical server?

 backend num1
 server1 IP:Port1
 server2 IP:Port1
 backend num2
 server1 IP:Port2
 server2 IP:Port2

 And I want to limit resources based on
 the entities server1, server2 while sharing
 their resources among the backends.

 Hint appreciated.

 Best regards
 Andreas Mock






Re: Load Balance individual requests

2013-09-02 Thread Kevin C

On 31/08/2013 09:10, Willy Tarreau wrote:

On Thu, Aug 29, 2013 at 05:43:48PM +0200, Kevin COUSIN wrote:

Very good guide, I will follow it.

Thanks a lot !

You can thank Baptiste for this great one, and us for hearing him complain
about the complex setup for all the time it took him to test over and over
to ensure that what he wrote really works out of the box :-)

Willy


Hi,

I followed this excellent guide (thanks to Baptiste) but I have an issue.
When I try to get the certificate on port 5061, I can't get it
through HAProxy.


 openssl s_client -connect 10.250.0.80:5061
CONNECTED(0003)
139851101718160:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake 
failure:s23_lib.c:177:

---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 322 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE

But I can get it if I request the Edge Servers directly. I use HAProxy
1.5-dev19.




RE: https with haproxy

2013-09-02 Thread Rezhna Hoshyar
Dear all,

Could you please tell me how I can get a free SSL certificate? I have tried many
ways mentioned on the Internet, but none of them were useful.

Rezhna 

-Original Message-
From: Baptiste [mailto:bed...@gmail.com] 
Sent: Sunday, September 1, 2013 9:44 PM
To: Rezhna Hoshyar
Cc: Lukas Tribus; haproxy@formilux.org
Subject: Re: https with haproxy

Hi Rezhna,

Use 'http-request redirect scheme' to do this, for example:
http-request redirect scheme https if ! { ssl_fc }

It will force HTTPS whatever the hostname is.
As Lukas stated, you have to own the certificate, and the frontend / backend
must be in mode http.
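
For illustration, a minimal frontend along those lines (the certificate path and
addresses are only placeholders):

    frontend www
        mode http
        bind :80
        bind :443 ssl crt /etc/haproxy/site.pem
        http-request redirect scheme https if ! { ssl_fc }
        default_backend web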

Baptiste



On Sun, Sep 1, 2013 at 4:56 PM, Rezhna Hoshyar rezhna.hosh...@fanoos.iq wrote:

 Hi,

 Actually we want to apply it for our company web sites.

 Rezhna

 -Original Message-
 From: Lukas Tribus [mailto:luky...@hotmail.com]
 Sent: Sunday, September 1, 2013 5:44 PM
 To: Rezhna Hoshyar
 Cc: haproxy@formilux.org
 Subject: RE: https with haproxy

 Hi,

 My question is about how to use HTTPS with haproxy, not how to avoid it.

 Compile haproxy 1.5 with SSL support and enable it. You can find details in 
 doc/ and some generic examples in examples/.



 I can use haproxy to redirect http://google.com to http://yahoo.com, 
 but I cannot do that with https://google.com.

 Well, do you have a certificate for google.com (or whatever website you need 
 to redirect)? You cannot do this without a valid certificate, otherwise HTTPS 
 would not make any sense.



 Regards,

 Lukas








RE: Load Balance individual requests

2013-09-02 Thread Lukas Tribus
Hi!


 I follow this excellent guide (thanks to Baptiste ) but I have an issue. 
 When I try to get the certificate on the 5061 port, I can't get it 
 throught HAproxy.
 
 openssl s_client -connect 10.250.0.80:5061
 CONNECTED(0003)
 139851101718160:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake 
 failure:s23_lib.c:177:

Looks like 5061 is a plaintext port? Did you configure the bind line with
the ssl keyword and the appropriate certificate?
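
(For reference, if haproxy itself were meant to terminate TLS on that port, the
bind line would need something along these lines -- the certificate path is just a
placeholder:

    bind 10.250.0.80:5061 ssl crt /etc/haproxy/edge.pem

whereas a plain "bind 10.250.0.80:5061" in mode tcp just forwards the raw bytes
to the backend.)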



Lukas 


Re: Load Balance individual requests

2013-09-02 Thread Kevin C

On 02/09/2013 15:07, Lukas Tribus wrote:

Hi!


Hi !

I follow this excellent guide (thanks to Baptiste ) but I have an issue.
When I try to get the certificate on the 5061 port, I can't get it
throught HAproxy.
  
openssl s_client -connect 10.250.0.80:5061

CONNECTED(0003)
139851101718160:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake 
failure:s23_lib.c:177:

Looks like 5061 is a plaintext port? Did you configure the bind line with
the ssl keyword and the appropriate certificate?


Here is the configuration:

frontend fe_edge_pool_external_access
    timeout client 30m
    mode tcp
    bind 10.250.0.80:443 name https
    bind 10.250.0.80:5061 name sip
    default_backend bk_edge_pool_external_access

Does HAProxy pass the TCP connection directly to the backend?


Lukas   

Kevin C



RE: Load Balance individual requests

2013-09-02 Thread Lukas Tribus
Hi!


 Does HAproxy pass TCP connection directly to backend?

It depends ... can you show the configuration of the backend as well?

Regards,
Lukas 


Re: Load Balance individual requests

2013-09-02 Thread Kevin C

On 02/09/2013 16:09, Lukas Tribus wrote:

Hi!



Does HAproxy pass TCP connection directly to backend?

It depends ... can you show the configuration of the backend as well?

Sure,
Here is the configuration :

backend bk_edge_pool_external_access
    timeout server 30m
    timeout connect 5s
    mode tcp
    balance leastconn
    source 0.0.0.0 usesrc clientip
    stick on src table _edge_pool_external_persistence
    default-server inter 5s fall 3 rise 2 on-marked-down shutdown-sessions
    server LEDG02002-81 10.250.0.81:5061 weight 10 check observe layer4 port 5061 check-ssl
    server LEDG02003-82 10.250.0.82:5061 weight 10 check observe layer4 port 5061 check-ssl




Regards,
Lukas   

Regards,

Kevin C



RE: Load Balance individual requests

2013-09-02 Thread Lukas Tribus
Hi!


 source 0.0.0.0 usesrc clientip

So you are using TPROXY mode. Does your network configuration allow
that?

Can you try without TPROXY mode? Just remove the source line and retry.




Regards,

Lukas 


Re: Load Balance individual requests

2013-09-02 Thread Kevin C

On 02/09/2013 16:26, Lukas Tribus wrote:

Hi!



source 0.0.0.0 usesrc clientip

So you are using using TPROXY mode. Does your network configuration allow
that?

Can you try without TPROXY mode? Just remove the source line and retry.

Yes, it works. I don't know if I must set up TPROXY for load
balancing Lync Edge Servers.



Regards,

Lukas   





Re: https with haproxy

2013-09-02 Thread Baptiste
Rezhna,

You can start with a script I used when I wrote some blog articles
about HAProxy and SSL:
https://github.com/exceliance/haproxy/tree/master/blog/ssl_client_certificate_management_at_application_level

You'll be able to generate self-signed certificates.
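
If you just need a quick self-signed certificate for testing, something along
these lines also works (file names are arbitrary; haproxy expects the private key
and the certificate concatenated in a single PEM file):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout site.key -out site.crt -subj "/CN=www.example.com"
    cat site.crt site.key > site.pem
    # then reference it on the bind line:  bind :443 ssl crt /path/to/site.pem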

Good luck,
Baptiste



On Mon, Sep 2, 2013 at 2:59 PM, Nick Jennings n...@silverbucket.net wrote:
 http://www.startssl.com



 On Mon, Sep 2, 2013 at 2:51 PM, Rezhna Hoshyar rezhna.hosh...@fanoos.iq
 wrote:

 Dear,

 Could you please tell me how I can get free ssl certificate as I tried
 many ways mentioned on Internet , but none of them were useful

 Rezhna

 -Original Message-
 From: Baptiste [mailto:bed...@gmail.com]
 Sent: Sunday, September 1, 2013 9:44 PM
 To: Rezhna Hoshyar
 Cc: Lukas Tribus; haproxy@formilux.org
 Subject: Re: https with haproxy

 Hi Rezhna,

 Use the http-request redirect scheme to do this, as example:
 http-request redirect scheme https if ! { ssl_fc }

 It will force HTTPs whatever the hostname is.
 As Lukas stated, you have to own the certificate and the frontend /
 backend must be in mode http.

 Baptiste



 On Sun, Sep 1, 2013 at 4:56 PM, Rezhna Hoshyar rezhna.hosh...@fanoos.iq
 wrote:
 
  Hi,
 
  Actually we want to apply it for our company web sites.
 
  Rezhna
 
  -Original Message-
  From: Lukas Tribus [mailto:luky...@hotmail.com]
  Sent: Sunday, September 1, 2013 5:44 PM
  To: Rezhna Hoshyar
  Cc: haproxy@formilux.org
  Subject: RE: https with haproxy
 
  Hi,
 
  My question is about how to use https with haproxy , not avoiding it.
 
  Compile haproxy 1.5 with SSL support and enable it. You can find details
  in doc/ and some generic examples in examples/.
 
 
 
  I can use haproxy to redirect http://google.com to http://yahoo.com,
  but I cannot do that with https://google.com.
 
  Well, do you have a certificate for google.com (or whatever website you
  need to redirect)? You cannot do this without a valid certificate, 
  otherwise
  HTTPS would not make any sense.
 
 
 
  Regards,
 
  Lukas
 
 
 







Re: Limits for physical server

2013-09-02 Thread Baptiste
Hi Andreas,

My last question was more about how, within HAProxy, you decide
to forward one request to a particular backend.
What criteria are you using?

Anyway, your numbers are huge, so no simple workaround may apply.

And unfortunately, the maxconn server parameter can't be changed through the
HAProxy socket.
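
(For reference, a static per-server maxconn can be set in the configuration, e.g.

    backend num1
        server server1 192.0.2.10:8001 maxconn 100

with placeholder addresses, but it applies to each server line of each backend
separately, so it does not cap the physical host across backends, which is the
limitation discussed here.)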

I'm sorry I can't help here.

Baptiste

On Mon, Sep 2, 2013 at 2:00 PM, Andreas Mock andreas.m...@drumedar.de wrote:
 Hi Baptiste,

 the answers to your questions:

 1) No persistence needed. http(s)-Proxy (1.5.x)
 2) 6 + x physical servers, 97 frontend services (IP-Port-Combinations),
 and almost any frontend service can be served by a service on the physical
 server.
 3) currently round robin. Open for other advice.

 Best regards
 Andreas Mock

 P.S.: Would a logical grouping of servers (in terms of HA)
 to server groups with the ability to have config variables
 for server groups a meaningful feature request?


 -----Original Message-----
 From: Baptiste [mailto:bed...@gmail.com]
 Sent: Monday, 2 September 2013 11:50
 To: Andreas Mock
 Cc: haproxy@formilux.org
 Subject: Re: Limits for physical server

 Hi,

 This is not easily doable out of the box, but some workarounds may be doable.
 Please let me know the few information below:
 - Do you need persistence?
 - how many servers?
 - how many backends?
 - how do you take routing decision between backends

 Baptiste


 On Mon, Sep 2, 2013 at 11:15 AM, Andreas Mock andreas.m...@drumedar.de 
 wrote:
 Hi all,

 I'm not sure if the following is doable:

 I have several servers (processes providing services) on
 one physical server. Is there a way to limit the count
 of connections for the physical server?

 backend num1
 server1 IP:Port1
 server2 IP:Port1
 backend num2
 server1 IP:Port2
 server2 IP:Port2

 And I want to limit resources based on
 the entities server1, server2 while sharing
 their resources among the backends.

 Hint appreciated.

 Best regards
 Andreas Mock






Re: Issue with 1.5-dev19 and acl foo sc1_inc_gpc0 gt 0 in backend

2013-09-02 Thread Baptiste
Hi Toni,

Maybe you can use a dummy tracking backend which is pointed to by all
your backends.
But it means the counters will be incremented whatever backend the
clients passed through (maybe that's not an issue).

And I'm not even sure it can work.
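
A minimal sketch of that idea (untested; the names are placeholders, and it is
essentially the shared-table pattern Willy describes further down in this digest):

    backend shared-track
        # never receives traffic, only holds the shared table
        stick-table type ip size 50k expire 120m store gpc0,http_req_rate(120s)

    backend web29
        tcp-request content track-sc2 src table shared-track if METH_POST
        ...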

Baptiste


On Mon, Sep 2, 2013 at 8:27 AM, Toni Mattila t...@solu.fi wrote:
 Hi,


 On 2.9.2013 8:55, Willy Tarreau wrote:

   backend web29
       stick-table type ip size 50k expire 120m store gpc0,http_req_rate(120s)
       tcp-request content track-sc2 src if METH_POST
       stick store-request src if METH_POST
       acl bruteforce_detection sc2_http_req_rate gt 5
       acl foo sc2_inc_gpc0 gt 0
       http-request deny if foo bruteforce_detection
       server web29 94.199.58.249:80 check
 I think that with the fix above it will work. BTW, you don't need
 the stick store-request statement, but I suspect you used it to
 debug the issue.


 This works on the backend side... but how do I get that sc2_get_gpc0 working on
 the frontend?

 The idea is that I will have multiple backends, but once one backend detects a
 certain IP being over the limit, it would be blocked already at the frontend.

 For some reason the ACL 'flagged_as_abuser sc2_get_gpc0 gt 0' doesn't
 evaluate to true when using:
 use_backend bk_login_abusers if flagged_as_abuser


 Thanks in advance,
 Toni Mattila






Re: Debugging Backendforwarding and UP status

2013-09-02 Thread Baptiste
My answers inline.

On Fri, Aug 30, 2013 at 11:30 AM, Sebastian Fohler i...@far-galaxy.de wrote:
 at first, sorry, I meant to say hi, but I had a very long night and it seems
 I have missed it.

Sorry on my side as well, but I'm fed up with impolite people who ask
for help but don't say hi or thanks, or even say whether the solution works.

 About the html. Thunderbird has a default html and txt message setting by
 default, normally I change that, but as I said, I had a long night. The next
 time I'll remember that.

Thanks.

 Concerning the load balancing, I have experience with load balancing, and
 yes I knew it was a backend problem.

So why point at HAProxy???
Your sentence: "it's definitely a problem of haproxy shutting down the backends"


 Most of the backends have been shown as
 down in my stats, as I already wrote in my last message. The only thing I
 thought strange was that one was shown as up and still got me that 503 error.

A 503 is the consequence of no servers being available.


 About the debugging, that was the question: how much information does
 HAProxy provide to find the error concerning those backend health checks and
 the shutting down of those systems?
 I've set the log to debug mode, but everything I got was this sort of log
 entry:

 Aug 30 09:48:49 localhost haproxy[17568]: Connect from 81.44.136.142:54570
 to 192.168.48.12:80 (www.adworxs.net-merged/HTTP)

Enable health-check logging and turn on HTTP logs.
You'll have very useful information then.
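
For example, something along these lines in the defaults (or the affected proxy)
section:

    option httplog
    option log-health-checks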

 So I couldn't find the reason why all the backends had been shut down.
 Obviously because the check thought they were not available, but the problem
 is that the same configuration had already been working.

So why point at HAProxy...

 I had a network problem yesterday and had to reboot those haproxy systems;
 since that moment none of the configured websites has worked anymore.

Can you let us know why a network issue is the reason for a system reboot?
What type of issue were you experiencing?
Since when does a reboot fix issues on Linux?

 So my question was: which log interface gives me the correct information
 about the checks, and what would be the best way to analyze this problem?

Willy explained to you in another mail that tcpdump is your best friend.
If for some reason, with any tool, you can't get a debugging mode or you
don't know how to enable it, then use tcpdump.
You'd have seen the HAProxy health-check request and the server response, and
I guess in a few seconds you'd have discovered where the problem is in
the response.


Sorry, but somebody who disables health checks because they shut down
the servers deserves some load-balancing training :p
Health checks are here to ensure the server is available. If the health
check doesn't pass, the traffic is not supposed to pass either... So
definitely, disabling health checking could not have been the solution
to your problem.

Baptiste


 Thank you so far.
 Best regards
 Sebastian


 On 30.08.2013 07:38, Baptiste wrote:

 Sebastian,

 1. when you talk to a ML, you should say 'Hi'
 2. when you talk to a ML, you shouldn't send HTML mails

 Now, I can see you have absolutely no experience with Load-Balancing.
 Here are a few clues for you:
 - when you have a 503 error, then no need to think, it means ALL the
 servers from the farm are seen DOWN
 - the purpose of the health check is to ensure the service is UP and
 RUNNING on the servers
 - Usually, it is a good idea to enable health checking when
 load-balancing, to allow haproxy to know server status to avoid
 sending client requests to dead servers
 - instead of disabling health checking, you should be troubleshooting
 it: HAProxy logs will tell you why the health check was not working.

 Good luck,

 Baptiste


 On Fri, Aug 30, 2013 at 6:19 AM, Sebastian Fohleri...@far-galaxy.de
 wrote:

 Ok, I disabled the health check and it's working now, so it's definitly a
 problem of haproxy shuting down the backends.

 On 30.08.2013 05:55, Sebastian Fohler wrote:

 Some help, would be to disable the health check for the time being, is
 that
 possible.
 At least it would be a quickfix.

 On 30.08.2013 05:25, Sebastian Fohler wrote:

 Is there some simple way to find out why I get this error from my haproxy
 cluster?

 503 Service Unavailable

 No server is available to handle this request.

 It looks like all my backend servers are down. Even in pools which are
 shown
 as up in my stats.
 How can I debug that sensible?

 Thank you in advance.
 Best regards
 Sebastian











send-proxy on FreeBSD

2013-09-02 Thread David BERARD
Hi,

I've an issue with send-proxy on HAProxy-1.5-dev19 running on FreeBSD.
 
Since dev13 I can't get send-proxy to work on FreeBSD; connections to
the backend server (another haproxy with the accept-proxy bind option) are
immediately closed.

Version dev12 works correctly on FreeBSD, and dev19 on Linux works too.

The connection seems to be closed in stream_interface.c (out_error):

 470         if (ret < 0) {
 471                 if (errno == EAGAIN)
 472                         goto out_wait;
 473                 goto out_error;
 474         }


# cat /usr/local/etc/haproxy.conf 
global
log /var/run/log local0 debug
maxconn 4096
uid 99
gid 99
daemon

defaults 
log global
contimeout  5000
clitimeout  5
srvtimeout  5
retries 0
option  redispatch
maxconn 2000

listen ddos 127.0.0.1:80
mode http
server  myserver X.X.X.X:80 send-proxy


# ./haproxy -v
HA-Proxy version 1.5-dev19 2013/06/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu


# ./haproxy -f /usr/local/etc/haproxy.conf -d
Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
0001:ddos.accept(0004)=0006 from [127.0.0.1:18493]
0001:ddos.clireq[0006:]: GET / HTTP/1.1
0001:ddos.clihdr[0006:]: User-Agent: curl/7.31.0
0001:ddos.clihdr[0006:]: Host: 127.0.0.1
0001:ddos.clihdr[0006:]: Accept: */*
0001:ddos.srvcls[0006:0007]
0001:ddos.clicls[0006:0007]
0001:ddos.closed[0006:0007]


# tcpdump
23:12:17.405476 IP HAPROXY_IP.32958 > SERVER_IP.80: Flags [S], seq 3939228300, win 65535, options [mss 1460,nop,wscale 8,sackOK,TS val 21313989 ecr 0], length 0
23:12:17.405537 IP SERVER_IP.80 > HAPROXY_IP.32958: Flags [S.], seq 763473061, ack 3939228301, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
23:12:17.405979 IP HAPROXY_IP.32958 > SERVER_IP.80: Flags [R], seq 3939228301, win 0, length 0

Best Regards,

David BERARD

contact(at)davidberard.fr

*   No electrons were harmed in the transmission of this email *





RE: send-proxy on FreeBSD

2013-09-02 Thread Lukas Tribus
Hi David,


 Since dev13 I can't get send-proxy to work on FreeBSD, connections to
 the backend server (another haproxy with accept-proxy bind option) are
 imediately closed.

 Version dev12 works correctly on FreeBSD, and dev19 on Linux works too.

The best thing would be if you could git bisect this, so we would know exactly
which of the 294 patches committed between dev12 and dev13 is causing this.
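
A rough outline of such a bisect session (assuming a checkout of the haproxy git
tree and that the release tags are named as below):

    git bisect start
    git bisect bad v1.5-dev13
    git bisect good v1.5-dev12
    # build and test the revision git checks out, then mark it:
    git bisect good        # or: git bisect bad
    # repeat until git reports the first bad commit, then:
    git bisect reset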

Also I think a strace'ing while reproducing the problem could help.



Regards,

Lukas 


Re: Haproxy + nginx + naxsi

2013-09-02 Thread Shannon Francis
 On Mon, Jun 10, 2013 at 6:15 PM, Hugues Lepesant hugues@... wrote:
  Hello all,
 
 
 
  I'm trying to make this tutorial work :
 
 
 
 
http://blog.exceliance.fr/2012/10/16/high-performance-waf-platform-with-naxsi-and-haproxy/
 
 
 
  But when I check the configuration of haproxy, I get these errors:
 
 
 
  # haproxy -c -f /etc/haproxy/haproxy.test.cfg
  [ALERT] 160/191308 (22091) : parsing [/etc/haproxy/haproxy.test.cfg:32] :
  error detected while parsing ACL 'abuse' : ACL keyword 'sc1_http_req_rate'
  takes no argument.
  [ALERT] 160/191308 (22091) : parsing [/etc/haproxy/haproxy.test.cfg:33] :
  error detected while parsing ACL 'flag_abuser' : ACL keyword 'sc1_inc_gpc0'
  takes no argument.
  [ALERT] 160/191308 (22091) : parsing [/etc/haproxy/haproxy.test.cfg:34] :
  'tcp-request content reject' : error detected in frontend 'ft_waf' while
  parsing 'if' condition : no such ACL : 'abuse'
  [ALERT] 160/191308 (22091) : parsing [/etc/haproxy/haproxy.test.cfg:56] :
  error detected while parsing ACL 'abuse' : ACL keyword 'sc1_http_err_rate'
  takes no argument.
  [ALERT] 160/191308 (22091) : parsing [/etc/haproxy/haproxy.test.cfg:57] :
  error detected while parsing ACL 'flag_abuser' : ACL keyword 'sc1_inc_gpc0'
  takes no argument.
  [ALERT] 160/191308 (22091) : parsing [/etc/haproxy/haproxy.test.cfg:58] :
  'tcp-request content reject' : error detected in backend 'bk_waf' while
  parsing 'if' condition : no such ACL : 'abuse'
  [ALERT] 160/191308 (22091) : Error(s) found in configuration file :
  /etc/haproxy/haproxy.test.cfg
  [WARNING] 160/191308 (22091) : config : log format ignored for frontend
  'ft_waf' since it has no log address.
  [WARNING] 160/191308 (22091) : config : log format ignored for frontend
  'ft_web' since it has no log address.
  [ALERT] 160/191308 (22091) : Fatal errors found in configuration.

Hugues,

It looks like these lines from that tutorial are causing some hang ups:

---
  acl abuse sc1_http_req_rate(ft_web) ge 100
  acl flag_abuser sc1_inc_gpc0(ft_web)
  . . . 
  acl abuse sc1_http_err_rate(ft_waf) ge 10
  acl flag_abuser sc1_inc_gpc0(ft_waf)
---

HAProxy is complaining because those fetch methods don't take arguments.
Also, from the tutorial it looks like neither of these two front-ends tracks
anything or has any stick-tables, so:

---
  acl abuse sc1_http_req_rate ge 100
  acl flag_abuser sc1_inc_gpc0
  . . . 
  acl abuse sc1_http_err_rate ge 10
  acl flag_abuser sc1_inc_gpc0
---

might make more sense.

Best of luck,
Shannon




Re: Issue with 1.5-dev19 and acl foo sc1_inc_gpc0 gt 0 in backend

2013-09-02 Thread Toni Mattila

Hi,

On 2.9.2013 23:00, Baptiste wrote:

Maybe you can use a dummy tracking backend which is pointed by all
your backends.
But it means the counters will be incremented whatever backend the
clients passed through (maybe it's not an issue).
And I'm not even sure it can work.


So am I misunderstanding how the original solution at 
http://blog.exceliance.fr/2013/04/26/wordpress-cms-brute-force-protection-with-haproxy/ 
is supposed to work?


Doesn't it do sc1_inc_gpc0 in the backend so the frontend can do sc1_get_gpc0?

Or are those different counters?

Thanks,
Toni





Re: send-proxy on FreeBSD

2013-09-02 Thread Willy Tarreau
Hi David,

On Mon, Sep 02, 2013 at 11:44:14PM +0200, David BERARD wrote:
 Hi,
 
 I've an issue with send-proxy on HAProxy-1.5-dev19 running on FreeBSD.
  
 Since dev13 I can't get send-proxy to work on FreeBSD, connections to 
 the backend server (another haproxy with accept-proxy bind option) are 
 imediately closed.
 
 Version dev12 works correctly on FreeBSD, and dev19 on Linux works too.
 
 Connection seem to be closed in stream_interface.c (out_error) :
 
   470         if (ret < 0) {
   471                 if (errno == EAGAIN)
   472                         goto out_wait;
   473                 goto out_error;
   474         }

Strangely this part has not changed between dev12 and dev13, but I suspect
it's a timing issue caused by other fixes (dev12 introduced the rework of
the connection management and was full of complex bugs).

It would be nice if you could add a perror("send_proxy") just before
the goto out_error. I suspect you're getting ENOTCONN, which is correctly
handled in raw_sock.c but not here.

Alternatively, could you try the following change:

   471 -  if (errno == EAGAIN)
   471 +  if (errno == EAGAIN || errno == ENOTCONN)

Thanks,
Willy




Re: Issue with 1.5-dev19 and acl foo sc1_inc_gpc0 gt 0 in backend

2013-09-02 Thread Willy Tarreau
On Mon, Sep 02, 2013 at 09:27:26AM +0300, Toni Mattila wrote:
 Hi,
 
 On 2.9.2013 8:55, Willy Tarreau wrote:
   backend web29
   stick-table type ip size 50k expire 120m store 
   gpc0,http_req_rate(120s)
   tcp-request content track-sc2  src if METH_POST
   stick store-request srcif METH_POST
   acl bruteforce_detection  sc2_http_req_rate gt 5
   acl foo sc2_inc_gpc0 gt 0
   http-request deny if foo bruteforce_detection
   server web29 94.199.58.249:80 check
 I think that with the fix above it will work. BTW, you don't need
 the stick store-request statement, but I suspect you used it to
 debug the issue.
 
 This works on backend side.. but how do I get that sc2_get_gpc0 working 
 on frontend?

Then put it in the frontend.

 Idea is that I will have multiple backends but once one backend detects 
 certain IP being over the limit it would be blocked already on the frontend.

OK but I'm having a hard time understanding exactly what you want to do.

Consider sc0, sc1, sc2 as independent pointers to up to 3 table entries.
Once any of them is tracked, it is tracked till the end of the session
(or the request when using http). So whatever you track in the frontend
is obviously available in the backend. Then all counters that are stored
are available.

So if what you're trying to do is to count the rate of POST requests and
block source IP addresses, then I think you'll need two different pointers,
just because you want to count a request only in the case of a POST, which
explains why you have a 'track ... if ...'.

So what I could suggest :

   - frontend : track/check source address
   - backend : track/count POST requests

   backend per-ip
  stick-table type ip size 50k expire 120m store gpc0

   frontend
  tcp-request connection track-sc1 src table per-ip
  tcp-request connection reject if { sc1_get_gpc0 gt 0 }
  ...
  use_backend foo...
 
   backend foo
  stick-table type ip size 50k expire 120m store http_req_rate(120s)
  tcp-request content track-sc2 src if METH_POST
  acl bruteforce_detection  sc2_http_req_rate gt 5
  acl block sc1_inc_gpc0 gt 0
  http-request deny if bruteforce_detection block

You see, then the frontend enables tracking of the source address,
while the backend monitors the POST request rate for each backend
and flags the source address so that it can be checked in the frontend.

You could also decide that you use the same table for everything,
so that a source address sending many POST requests to different
sites will be detected as well :

   backend per-ip
  stick-table type ip size 50k expire 120m store gpc0,http_req_rate(120s)

   frontend
  tcp-request connection track-sc1 src table per-ip
  tcp-request connection reject if { sc1_get_gpc0 gt 0 }
  ...
  use_backend foo...
 
   backend foo
  tcp-request content track-sc2 src table per-ip if METH_POST
  acl bruteforce_detection  sc2_http_req_rate gt 5
  acl block sc1_inc_gpc0 gt 0
  http-request deny if bruteforce_detection block

Hoping this helps,
Willy