Hi John,

On Sat, Mar 07, 2009 at 10:19:15AM -0800, John L. Singleton wrote:
> Hi Willy,
> 
> What a great new feature! Being able to limit connections/second is  
> something we've probably all needed for a long time. Does this system  
> use the failover "down" message if the client has to wait too long  
> before having their request served?

I don't know which "down" message you're talking about, I'm sorry. If
a client tries to connect while the limit is in effect, its connection
will remain in the system's backlog as long as haproxy does not accept
it. So haproxy will not even be aware that there is someone knocking at
the door.
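
For illustration, a rate-limited frontend could look something like the
sketch below (the directive name and figures are only an example of the
idea):

    frontend www
        bind :80
        mode http
        maxconn 1000
        # accept at most 100 new sessions per second on this frontend;
        # connections above that rate simply wait in the kernel's accept
        # queue until haproxy picks them up again
        rate-limit sessions 100
        default_backend servers

The excess connections are not refused by haproxy itself; they just sit
in the backlog until they are accepted.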

> As I was thinking about the advantages of this, it occurred to me that  
> there could be another useful scenario. Let's say that we have a  
> powerful cluster of servers which can pretty much handle whatever  
> traffic (within normal peaks and so on) is thrown at it. In this case,  
> you really wouldn't want to put a request limiter on it. What I  
> *would* want to do is still have some protection against attacks,  
> primarily by restricting requests/second on an IP/URL basis. We can't  
> just restrict based on IP address as people behind NATs could quickly  
> get wrongfully limited. But if we took the combination of IP and URL  
> and said "each backend can only receive x requests per seconds per IP  
> and URL combination" we could also handle this other case.

hehe, that's what everyone wants :-)

I'm planning to add per-IP limiting first, then later add other combinations,
resulting in traffic classes in which we could group IPs, cookies, etc. The
major required change here is to make haproxy learn IP addresses and keep them
in memory for some time. These are about the same requirements as for handling
persistence.
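
Just to give an idea of the direction, a per-IP setup could eventually look
something like the sketch below. None of these directives exist today; the
names are purely hypothetical placeholders for "remember source addresses
and their recent connection rate, and act on it":

    frontend www
        bind :80
        # hypothetical: keep up to 200k source IPs in memory for 10 minutes,
        # each with its connection rate measured over the last 10 seconds
        stick-table type ip size 200k expire 10m store conn_rate(10s)
        # hypothetical: record every incoming source address in that table
        tcp-request connection track-sc0 src
        # hypothetical: reject a client once it exceeds 20 connections
        # per 10 seconds, whatever URL it asks for
        tcp-request connection reject if { sc0_conn_rate gt 20 }
        default_backend servers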

I'm not sure I'll implement per-URL limiting though, as it might be
particularly complex for little added benefit (e.g. it will not protect
against URL scans). However, limiting by application session would make
a lot of sense.

Regards,
Willy

