On 12/18/12 23:24, Paul Graydon wrote:
> [snip]
> It seems to me the next most likely solution is to try to combine either
> one with dedicated load-balancing software like haproxy or pound, so
> that the traffic would go [internet]->[apache/nginx
> WAF]->[haproxy/pound]->[web servers]; but part of me really dislikes the
> fact that it's adding two potentially significant failure points on each
> load-balancer instead of one. Maybe I'm worrying too much there, though.
>
> I'd love to hear some recommendations of software, if people have them,
> that might fulfill either role (or, in a dream world, wrap both up in one
> and do a good job?), and any experiences (positive or negative) you've
> had with them.

The load balancer stack that I have been messing with has four layers. :)

1) LVS with keepalived to balance across multiple proxy instances (see
   the keepalived sketch below the list).

Each proxy instance has nginx -> varnish -> haproxy.

2) nginx does SSL offload + HTTP header mangling (mainly
   X-Forwarded-For)
3) varnish does VERY configurable caching
4) haproxy does app server load balancing
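
For the front layer, a rough keepalived sketch, assuming two proxy
instances at 10.0.0.11/.12 behind a VIP of 10.0.0.10 and a hypothetical
/haproxy-health check URI (all of these are placeholders, not my actual
config), might look something like this:

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            10.0.0.10
        }
    }

    virtual_server 10.0.0.10 443 {
        delay_loop 6
        lb_algo wlc
        lb_kind DR              # direct routing; NAT also works
        protocol TCP

        real_server 10.0.0.11 443 {
            weight 1
            SSL_GET {           # nginx terminates SSL on 443
                url {
                    path /haproxy-health
                    status_code 200
                }
                connect_timeout 3
            }
        }
        real_server 10.0.0.12 443 {
            weight 1
            SSL_GET {
                url {
                    path /haproxy-health
                    status_code 200
                }
                connect_timeout 3
            }
        }
    }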

haproxy is also configured to respond to a special HTTP request directly, without involving the app servers. keepalived sends that request in through nginx and varnish and expects the response back from haproxy, so a successful check proves the health of the whole proxy instance (see the sketch below).
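
A hedged sketch of that health path, reusing the hypothetical
/haproxy-health URI from above: haproxy answers it itself via
monitor-uri, and varnish is told not to cache it so the check really
goes end to end (addresses and ports are placeholders):

    # haproxy.cfg (fragment)
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend local_http
        bind 127.0.0.1:8080
        monitor-uri /haproxy-health        # answered by haproxy itself with 200 OK
        default_backend app_servers

    backend app_servers
        balance roundrobin
        server app1 10.0.1.11:80 check
        server app2 10.0.1.12:80 check

    # default.vcl (fragment) - keep the health check out of the cache
    sub vcl_recv {
        if (req.url == "/haproxy-health") {
            return (pass);
        }
    }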

nginx is configured to spawn one worker process per CPU, as it is the most CPU-intensive layer because of the SSL offload.
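
Roughly what the nginx layer might look like (cert paths, ports, and the
8-core worker count are placeholders, not my exact config):

    worker_processes  8;    # one worker per CPU core

    events {
        worker_connections  1024;
    }

    http {
        upstream varnish {
            server 127.0.0.1:6081;    # varnish listens here, haproxy sits behind it
        }

        server {
            listen 443 ssl;
            ssl_certificate      /etc/nginx/ssl/site.crt;
            ssl_certificate_key  /etc/nginx/ssl/site.key;

            location / {
                # header mangling: record the real client address
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_pass http://varnish;
            }
        }
    }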

You can spin up multiple proxy instances to handle the SSL load.

I am looking into hacking varnish to use memcached instead of its internal memory pool, so the cache can be shared across the multiple proxy instances.

There is also some system tuning needed to get high transaction rates, mainly around TIME_WAIT sockets.
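
That tuning is mostly sysctl work; a hedged starting point (values are
illustrative, not tuned for any particular load):

    # /etc/sysctl.conf
    net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for proxy->backend connections
    net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for new outbound connections
    net.ipv4.tcp_fin_timeout = 15               # shorten FIN-WAIT-2 hold time
    net.ipv4.tcp_max_tw_buckets = 400000        # cap the number of TIME_WAIT sockets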

--
Mr. Flibble
King of the Potato People