RE: SSL best option for new deployments

2011-12-13 Thread David Prothero
I've been using stunnel with the X-Forwarded-For patch. Is stud preferable to 
stunnel for some reason?

David


-Original Message-
From: Brane F. Gračnar [mailto:brane.grac...@tsmedia.si] 
Sent: Tuesday, December 13, 2011 1:36 PM
To: John Lauro
Cc: haproxy@formilux.org
Subject: Re: SSL best option for new deployments

On 12/13/2011 09:02 PM, John Lauro wrote:
 Been using haproxy for some time…  but have not used it with SSL yet.
 
 I do need to preserve the IP address of the original client.  So 
 either transparent (is that possible when going through stunnel or 
 other and haproxy on the same box), or X-Forwarded-for or similar added.

You should probably put stud (https://github.com/bumptech/stud) in front of 
haproxy. It supports the PROXY protocol (send-proxy) from haproxy 1.5, IPv6, 
and scaling out.

There's also a patch for the send-proxy protocol that applies to haproxy 1.4.
However, you shouldn't be afraid of running haproxy 1.5-devXX; it is really 
very stable.

Best regards, Brane




RE: HAProxy performance issues

2011-11-15 Thread David Prothero
Thanks John. We did re-run our tests with this option enabled, but it had no 
effect. Curl was keeping the connection alive before, just all the way to the 
web server. This option just changed the test so it was only staying connected 
to haproxy. We definitely like this option better as we get our custom headers 
from haproxy for every request now, but unfortunately it didn't make the 
performance difference go away.

David


-Original Message-
From: John Marrett [mailto:jo...@zioncluster.ca] 
Sent: Saturday, November 12, 2011 6:24 AM
To: David Prothero
Cc: haproxy@formilux.org
Subject: Re: HAProxy performance issues

David,

I do not believe that your configuration correctly implements all of the 
options required for keep-alive. I suspect that your clients are forced to 
initiate a new connection for each page element. On an SSL connection this will 
have an even more substantial impact.

You can take a look at this blog posting, as well as the mailing list archives

http://blog.killtheradio.net/technology/haproxys-keep-alive-functionality-and-how-it-can-speed-up-your-site/

I believe you are missing at least:

option http-server-close

While the page refers to option httpclose, my reading of the configuration guide 
suggests that this option may close client-facing connections as well.

You will not have keepalive between haproxy and the server, which will impact 
performance if there is substantial latency between haproxy and the backend 
servers. You should use network captures to ensure that proper keepalive is 
maintained between the client and the haproxy machine.

   timeout client 50s

This is also an extremely long timeout value; ordinarily you will only want to 
keep a keep-alive session open long enough to serve the elements of a single 
page. I would suggest perhaps 5 seconds.
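Putting both suggestions together, the defaults section might look like this (a sketch against the config you posted, not a drop-in replacement):

```
defaults
    mode http
    # keep client-side connections open across requests,
    # but close the server-side connection after each response
    option http-server-close
    # long enough to serve one page's worth of elements
    timeout client 5s
    timeout connect 5s
    timeout server 50s
```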

-JohnF





Re: HAProxy performance issues

2011-11-12 Thread David Prothero
Thanks for that tip. I will keep an eye out for that when we begin our SSL 
performance testing. Currently, however, the delay is with regular http 
connections directly to haproxy.

David

Wout Mertens wout.mert...@gmail.com wrote:

On Nov 11, 2011, at 17:43 , David Prothero wrote:

 The local test showed a very small (and more than acceptable) overhead of 
 7ms for the entire page load (all 29 requests) when going through HAProxy. 
 However, tests from longer distances over various IP’s showed an overhead 
 that seemed to be proportional to the amount of latency in the connection. 
 Typical overhead times we are seeing from various locations (both from 
 enterprise and consumer grade connections) are around 200-400ms.
 

Delays in multiples of 200ms are typically due to the Nagle algorithm. Try adding

socket=l:TCP_NODELAY=1
socket=r:TCP_NODELAY=1

to your stunnel configuration.

Wout.

HAProxy performance issues

2011-11-11 Thread David Prothero
HAProxy version 1.4.18

stunnel 4.44 with X-Forwarded-For patch

Ubuntu 10.04.3 LTS

Web servers running IIS 7 on Windows Server 2008

 

We have been doing some performance testing. We do a typical page load
using curl and a list of 29 URL's (an html file along with associated
scripts, css, images, etc.). We run this 200 times to get a good data
sample and try to smooth out any variances. We run one test pass against
the IIS servers directly and then another pass against HAProxy in front
of the same IIS servers.

 

We have run this test against a configuration setup in our own private
cloud, hosted in an enterprise-grade facility and we also ran it against
an HAProxy/IIS configuration setup in Amazon EC2. In both scenarios, we
ran the tests from multiple locations, over multiple ISP's. We also
always ran one test that was local to the servers.

 

The local test showed a very small (and more than acceptable) overhead
of 7ms for the entire page load (all 29 requests) when going through
HAProxy. However, tests from longer distances over various IP's showed
an overhead that seemed to be proportional to the amount of latency in
the connection. Typical overhead times we are seeing from various
locations (both from enterprise and consumer grade connections) are
around 200-400ms.

 

When the test is run locally, we see a 7ms increase in page load times.
We expect that is the native overhead of proxying the requests in our
configuration. What doesn't make sense is that the overhead seems to
increase when run over a wan. Since the 7ms is only added to the end of
the pipe, it seems like it should always be roughly 7ms, even if the
rest of the time is increased by a higher latency connection.

 

We have run the tests many, many times and have been getting consistent
results. HAProxy is always slower than direct. Not unexpected, but the
proportionality of the overhead to connection latency is unexpected. We
would expect the overhead attributable to HAProxy to be a static number.

 

Anyone have any thoughts? Is our expectation of static overhead not
warranted (we are not network engineers)? Or could there be some other
factors at play? I've pasted our haproxy.conf below. Thanks in advance
for any thoughts.

 

NOTE: I only mention stunnel in my config at the top so aspects of the
config below will make sense. However, all tests are via regular HTTP,
no encryption, so stunnel is not a factor at all in these tests.

 

global

  daemon

  maxconn 16384

  user nobody

  chroot /usr/local/etc/haproxy/

  pidfile /usr/local/etc/haproxy/haproxy.pid

  stats socket /tmp/haproxy

 

defaults

  mode http

  option redispatch

  timeout connect 5s

  timeout client 50s

  timeout server 50s

  timeout check 5s

  balance roundrobin

  option forwardfor except 127.0.0.1

  errorfile 503 /usr/local/etc/haproxy/503.http

 

frontend http-in

  bind :80,:8443

  default_backend servers

  acl from_stunnel dst_port eq 8443

  reqadd X-TRC-SSL:\ Yes if from_stunnel

  reqadd X-From-HAProxy:\ Yes

 

backend servers

  option httpchk HEAD /default.asp HTTP/1.0

  option log-health-checks

  server SMFWEB001 10.129.32.50:80 maxconn 8192 check port 80 inter 2000

  server SMFWEB002 10.129.32.51:80 maxconn 8192 check port 80 inter 2000

 

listen stats :1936

mode http

stats enable

stats uri /

 

---

David Prothero

I.T. Director

Pharmacist's Letter / Prescriber's Letter

Natural Medicines Comprehensive Database

Ident-A-Drug / www.therapeuticresearch.com

 

(209) 472-2240 x231

(209) 472-2249 (fax)

 



RE: SSL Pass through and sticky session

2011-11-07 Thread David Prothero
Yup. I accomplish what you're describing by using HTTP mode along with
stunnel for the SSL.

 

David

 

From: Vivek Malik [mailto:vivek.ma...@gmail.com] 
Sent: Monday, November 07, 2011 11:10 AM
To: Mir Islam
Cc: haproxy@formilux.org
Subject: Re: SSL Pass through and sticky session

 

You are running haproxy in a tcp mode since you are relaying SSL and
decrypting on the backend. Cookies can only be analyzed in HTTP mode.
Not sure how to do sticky sessions in tcp mode.

 

Vivek

On Mon, Nov 7, 2011 at 2:03 PM, Mir Islam mis...@mirislam.com wrote:

Is it possible to utilize some sort of sticky session for incoming
requests? SSL connections are terminated at the servers in the backend.
Right now I can do source-IP-based balancing. But then users behind a
firewall/NAT will not get load balanced correctly; instead, they all end
up on the same server. That is my main problem.


Here is a portion of my config. I added the cookie param but I guess it
will work with http only. Anyway, any help/pointer is appreciated.



listen  ssl-relay 0.0.0.0:443
   option  ssl-hello-chk
   balance source
   server  inst1 10.254.2.145:443 check inter 2000 fall 3
   server  inst2 10.46.19.211:443 check inter 2000 fall 3

   option  httpclose   # disable keep-alive
   option  checkcache  # block response if set-cookie cacheable

   cookie HASERVERID insert
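As Vivek notes, cookies won't work in TCP mode. One commonly cited alternative that keeps SSL pass-through is to stick on the SSL session ID from the handshake. This is a sketch, not a tested config: it needs haproxy 1.5-dev features, and it only helps when clients actually reuse SSL session IDs (clients behind the same NAT negotiate distinct session IDs, so they spread across servers):

```
listen  ssl-relay 0.0.0.0:443
   mode tcp
   balance roundrobin
   # wait long enough to inspect the SSL hello messages
   tcp-request inspect-delay 5s
   acl clienthello req_ssl_hello_type 1
   acl serverhello rep_ssl_hello_type 2
   tcp-request content accept if clienthello
   tcp-response content accept if serverhello
   # SSL session IDs are up to 32 bytes, found at offset 43 of the hello
   stick-table type binary len 32 size 30k expire 30m
   stick on payload_lv(43,1) if clienthello
   stick store-response payload_lv(43,1) if serverhello
   server  inst1 10.254.2.145:443 check inter 2000 fall 3
   server  inst2 10.46.19.211:443 check inter 2000 fall 3
```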

 



RE: haproxy and multi location failover

2011-11-03 Thread David Prothero
We use www.dnsmadeeasy.com (unsolicited plug) to do the automatic DNS failover 
that Joris is describing. It works well for us.

My colleague and I theorized another option would be to run your HAProxy 
instances as Amazon EC2 instances (one each in different availability zones) 
with an elastic IP. That way you'd be taking advantage of Amazon's routing 
network without having to build your own. Like I said, that's only been 
theorized. I haven't actually done that.



-Original Message-
From: joris dedieu [mailto:joris.ded...@gmail.com] 
Sent: Thursday, November 03, 2011 1:19 AM
To: haproxy@formilux.org
Subject: Re: haproxy and multi location failover

2011/11/1 Senthil Naidu senthil.na...@gmail.com:
 hi,

 we need to have a setup as follows



 site 1 site 2

   LB  (ip 1)   LB (ip 2)
    |   |
    |   |
  srv1  srv2  srv1 srv2

 site 1 is primary and site 2 is backup in case of site 1  LB's failure 
 or failure of all the servers in site1 the website should work from 
 backup location servers.

Unless you have your own routing, if you want zero downtime for everyone you 
have to consider a more complex scenario. As said below, the only way to switch 
from one datacenter to another is to use DNS.

So you have to find a way to cope while DNS propagation completes.

I'd do something like:
1) if lb1 fails
- change dns
- srv1-1 becomes an lb for itself and srv2-1

2) if srv1-1 and srv2-1 fail
- change dns
- lb1 forwards requests to lb2 (maybe slow, but better than nothing).

and so on ...
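Joris's step 2 (the surviving lb forwarding to the other site when the local farm is down) maps onto haproxy's backup server flag. A sketch with illustrative addresses:

```
backend local_farm
    server srv1-1 10.0.1.1:80 check
    server srv2-1 10.0.1.2:80 check
    # the backup server is used only once every non-backup server is down,
    # so traffic spills over to site 2's load balancer as a last resort
    server site2-lb 192.0.2.10:80 backup
```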

Joris

 Regards

 On Tue, Nov 1, 2011 at 10:31 PM, Gene J gh5...@gmail.com wrote:

 Please provide more detail about what you are hosting and what you 
 want to achieve with multiple sites.

 -Eugene

 On Nov 1, 2011, at 9:58, Senthil Naidu senthil.na...@gmail.com wrote:

 Hi,

 thanks for the reply. If the same needs to be done with DNS, do we 
 need any external DNS service, or can we use our own ns1 and ns2 for the 
 same?

 Regards


 On Tue, Nov 1, 2011 at 9:06 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 Do you want to failover the Frontend or the Backend?
 If this is the frontend, you can do it through DNS or RHI (but you 
 need your own AS).
 If this is the backend, there is nothing special to do: add your servers 
 to the conf in a separate backend, use some ACLs to take the failover 
 decision, and you're done.

 cheers


 On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu 
 senthil.na...@gmail.com
 wrote:
  Hi,
 
  Is it possible to use haproxy in a active/passive failover 
  scenario between multiple datacenters.
 
  Regards