On Apr 28, 2008, at 8:38 AM, Matthew Toseland wrote:

> Load management proposal:
>
> When we receive a request, we stick it in a queue. The queue is limited in
> length, and limited in queue time (probably 500-1000ms). If a request is
> still on the queue at the end of the timeout, or if there are too many
> requests on the queue, we reject it.
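
The quoted proposal could be sketched roughly like this (a minimal sketch, not Freenet's actual API; the names, the 32-entry cap, and the 750ms default are all illustrative):

```python
import time
from collections import deque

class RequestQueue:
    """Queue incoming requests, bounded in both length and residence time."""

    def __init__(self, max_len=32, max_wait=0.75):  # within the 500-1000ms window
        self.max_len = max_len
        self.max_wait = max_wait
        self.queue = deque()  # entries are (arrival_time, request)

    def offer(self, request, now=None):
        """Return True if the request is accepted, False if rejected."""
        now = time.monotonic() if now is None else now
        self._expire(now)
        if len(self.queue) >= self.max_len:
            return False  # too many requests on the queue: reject
        self.queue.append((now, request))
        return True

    def _expire(self, now):
        # Requests still on the queue past max_wait are rejected (timed out).
        while self.queue and now - self.queue[0][0] > self.max_wait:
            self.queue.popleft()

    def poll(self, now=None):
        """Take the next request to start, or None if the queue is empty."""
        now = time.monotonic() if now is None else now
        self._expire(now)
        return self.queue.popleft()[1] if self.queue else None
```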

You describe the latency cost as 'slight', but adding up to a second at
every node that rejects a request will add a lot of latency. Rather than
replacing instant rejection outright, perhaps we can still reject some
requests instantly, e.g. by only allowing so many active/pending requests
per peer.
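
A minimal sketch of that per-peer cap (the class name, method names, and the cap of 8 are illustrative, not an existing Freenet interface):

```python
from collections import defaultdict

class PerPeerLimiter:
    """Instantly reject requests from any peer already at its pending cap."""

    def __init__(self, max_pending_per_peer=8):
        self.max_pending = max_pending_per_peer
        self.pending = defaultdict(int)  # peer -> active/pending count

    def try_accept(self, peer):
        """Accept instantly unless this peer is already at its cap."""
        if self.pending[peer] >= self.max_pending:
            return False  # instant reject: no queueing latency added
        self.pending[peer] += 1
        return True

    def complete(self, peer):
        """Call when a request from this peer finishes (or fails downstream)."""
        self.pending[peer] -= 1
```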

> We have two options afaics:
> 1. We start the remote request closest to our location. We keep the separate
> request starter for local requests.
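
Option 1 might look something like this, assuming Freenet's circular keyspace on [0,1) (the helper names are illustrative):

```python
def circular_distance(a, b):
    """Distance between two locations on the circular [0,1) keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def next_remote_request(queued, node_location):
    """Start the queued remote request whose target location is closest
    to our own location; queued is a list of (location, request) pairs."""
    if not queued:
        return None
    return min(queued,
               key=lambda lr: circular_distance(lr[0], node_location))[1]
```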

This could have a fascinating effect on load balancing, effectively
preferring 'local' traffic. Although I am not sure of the need, it
resonates with the structure of small-world networks (most links are
short links --?--> most requests are near requests). Is the intent to
make distant requests take fewer (but larger) 'jumps' across the network?

--
Robert Hailey

