Hello Fabien,

On Tue, Jan 26, 2010 at 12:12:25AM +0100, Fabien Germain wrote:
> Hello,
> 
> I have a web hosting cluster, and I would like to rate limit by vhost
> (i.e. no more than 50 connections per second on www.domain1.com, for
> example). I found a way to do so, and I'd like to get your feedback
> on it:
> 
> 1) Create a frontend to accept HTTP connections, create one ACL per
> domain, and route each domain to a dedicated backend via its ACL:
> 
>   frontend http-in
>         bind       :80
>         mode       http
>         ...
>         acl dom1 hdr_end(host) -i .domain1.com
>         acl dom2 hdr_end(host) -i .domain2.com
>         ...
>         acl domN hdr_end(host) -i .domainN.com
> 
>         use_backend b_dom1 if dom1
>         use_backend b_dom2 if dom2
>         ...
>         use_backend b_domN if domN
> 
> 2) Create all the backends (one per domain), and use a be_sess_rate
> ACL in each one to redirect excessive requests elsewhere:
> 
>   backend b_dom1
>         mode    http
>         balance roundrobin
> 
>         acl too_much_requests be_sess_rate gt 50
>         redirect location http://192.168.56.103/tryagainlater.html if too_much_requests
> 
>         server web1 192.168.56.102:80 check
>         server web2 192.168.56.103:80 check
>         server web3 192.168.56.104:80 check
>         server web4 192.168.56.105:80 check
> 
> 
> => It works, and that's pretty cool. But I have two questions:

Indeed, that's usually the way to do it. You can even improve on it by
enabling the health checks in only one backend and having the servers
in the other backends track the ones in that first backend.
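
For illustration, a minimal sketch of that idea, reusing the backend and
server names from your example and assuming b_dom2 mirrors b_dom1: only
b_dom1 runs the checks, and the servers in b_dom2 reuse its check results
through "track":

  backend b_dom1
        mode    http
        balance roundrobin
        acl too_much_requests be_sess_rate gt 50
        redirect location http://192.168.56.103/tryagainlater.html if too_much_requests
        server web1 192.168.56.102:80 check
        server web2 192.168.56.103:80 check

  backend b_dom2
        mode    http
        balance roundrobin
        acl too_much_requests be_sess_rate gt 50
        redirect location http://192.168.56.103/tryagainlater.html if too_much_requests
        server web1 192.168.56.102:80 track b_dom1/web1
        server web2 192.168.56.103:80 track b_dom1/web2

This way each server's state is checked only once and shared by every
backend referencing it, which saves a lot of check traffic with 2000
domains.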

> * Is there a cleverer way to do it? I mean: if I have 2000 domains
> hosted on the cluster, it means 2000 ACLs and 2000 backend sections:
> not really easy to maintain... Is there a generic way to handle
> domains?

Right now, there is not. However, I have started working on a draft
for generic QoS on any criterion, relying on the new stickiness code.
That way you could create many classes of limitations and decide
whether a request is subject to one of them. You could then have
limits per backend, per frontend, per server, per source IP, per
header, etc.
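
Purely as an illustration of the direction (the feature is only a
draft, so the keywords below are hypothetical and not valid in any
released version, and "b_all" stands for whatever common backend would
serve all the domains), the idea is to track a criterion such as the
Host header in a table that stores a per-entry request rate, and to
act on it:

  frontend http-in
        bind :80
        mode http
        # hypothetical sketch: one table entry per Host header value,
        # storing the request rate observed over the last second
        stick-table type string len 64 size 10k expire 60s store http_req_rate(1s)
        http-request track-sc0 hdr(host)
        http-request deny if { sc0_http_req_rate gt 50 }
        default_backend b_all

The same mechanism would then apply to any other criterion (source IP,
cookie, etc.) simply by changing what is tracked.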

I think I will start working on that for 1.5. As 1.4 is getting very
close to a release, I don't want to break anything now.

> * Is that method really usable for so many domains? I guess that this
> kind of ACL will need a *lot* of CPU to handle several hundred
> requests per second across the 2000 hosted domains (and also a lot of
> memory?).

Indeed, it becomes huge. I don't think that a few hundred requests per
second will be a problem CPU-wise, but such a configuration will be
complicated to manage and a bit overkill.

Regards,
Willy

