Actually, there's a simpler way...

We track the rate at which requests have been sent to each node, and
take the median of those rates.

For each node, once it has been sent more than a certain minimum
number of requests, we only send another one if that node's sending
rate, recalculated as if the request had just gone out, would be less
than say 200% of the median.

Thus, we will not send more than twice the median request rate to any
node.
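
Something like this, in Java (a rough sketch only; the naive
count-over-elapsed rate estimate and every name here is a placeholder,
not real code):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the 200%-of-median cap. All names are illustrative.
class RateLimiter {
    static final int MIN_REQUESTS = 20; // assumed warm-up threshold
    static final double CAP = 2.0;      // 200% of the median

    /** May we send one more request to 'target' right now? */
    static boolean maySend(List<PeerStats> peers, PeerStats target, long now) {
        if (target.requestsSent < MIN_REQUESTS) return true;
        List<Double> rates = new ArrayList<Double>();
        for (PeerStats p : peers) rates.add(p.rate(now));
        Collections.sort(rates);
        double median = rates.get(rates.size() / 2);
        // The node's rate as it would be *after* sending this request:
        return target.rateAfterOneMore(now) < CAP * median;
    }
}

class PeerStats {
    long requestsSent;
    long firstSentAt; // millis; rate = count / elapsed, for simplicity

    double rate(long now) {
        return requestsSent / (double) Math.max(1, now - firstSentAt);
    }

    double rateAfterOneMore(long now) {
        return (requestsSent + 1) / (double) Math.max(1, now - firstSentAt);
    }
}

A decaying average would presumably be better than count/elapsed, but
the check itself is the same either way.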

On Wed, Sep 27, 2006 at 07:12:11PM +0100, toad wrote:
> The problem with this is that it will quite happily route almost all
> requests to an ubernode... Hence we need a second mechanism to ensure some
> level of fairness. One thing that was proposed was that we have separate
> token buckets for this purpose.
> 
> That goes something like this:
> 
> The below is incomplete. We need to check whether there is a request to
> be sent not only when we get a token, but also when we get a request.
> 
> We calculate the median node speed. Any node which is faster than this,
> we limit to the median speed. Any node slower than it gets full speed.
> 
> How to implement? A token bucket. No matter how many tokens the node has
> actually granted us, we don't let a request be forwarded unless it has a
> balancing token as well. Each node has a bucket of balancing tokens
> (this is just a counter), the same size as its outgoing requests bucket.
> We add tokens to these buckets, considered as a whole, at the same rate
> as tokens come in overall. But we add them in proportion to the desired
> speed distribution above. So we use floating point counters, and add a
> fraction to each counter when a request token comes in, perhaps...
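
As a quick sketch of that fractional-counter refill (everything below
is invented for illustration):

// Balancing-token buckets with fractional refill. Names invented.
class BalancingBuckets {
    final double[] tokens;   // floating point balancing-token counters
    final double[] share;    // desired speed distribution, sums to 1.0
    final double capacity;   // same size as the outgoing requests bucket

    BalancingBuckets(double[] share, double capacity) {
        this.share = share;
        this.capacity = capacity;
        this.tokens = new double[share.length];
    }

    /** Whenever any peer grants us a real request token, spread one
     *  token's worth across the buckets in proportion to the desired
     *  speed distribution. */
    void onTokenReceived() {
        for (int i = 0; i < tokens.length; i++)
            tokens[i] = Math.min(capacity, tokens[i] + share[i]);
    }

    /** A request may only be forwarded to peer i if it can spend a
     *  balancing token as well as a real one. */
    boolean tryConsume(int i) {
        if (tokens[i] < 1.0) return false;
        tokens[i] -= 1.0;
        return true;
    }
}

Here share[i] would come from the median rule above: peers faster than
the median get the median's share, slower peers their actual speed.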
> 
> Problem with the below: How many tokens do we start with? If not full,
> then the one picked will be filled by the add-to-fullest-non-full-bucket
> algorithm. But if full, won't they all be able to send as many requests
> as they want to? No, it depends on how many are in flight... How many
> should be in flight? No idea. Seems we may need some simulations after
> all. Worthwhile specifying this as much as possible though, will make a
> start on the wiki after speccing opennet.
> 
> Another possibility would be to simply impose an arbitrary limit; we
> won't forward a request to a node unless it's in the top 2 or 3 choices
> for it.
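
That check is nearly trivial, assuming we can get the peers sorted by
closeness to the key (names invented):

import java.util.List;

// Only forward to 'candidate' if it is among the top K routing
// choices for the key. 'routedOrder' is assumed to be the peers
// sorted best-first for this key.
class TopKCheck {
    static final int K = 3;

    static boolean mayForward(List<String> routedOrder, String candidate) {
        return routedOrder.subList(0, Math.min(K, routedOrder.size()))
                          .contains(candidate);
    }
}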
> 
> On Wed, Sep 27, 2006 at 05:01:40PM +0100, toad wrote:
> > TOKEN PASSING LOAD LIMITING
> > ---------------------------
> > 
> > Each node has a maximum limit N on the number of running requests.
> > 
> > Each node has a bucket for each of its peers, containing tokens. A
> > token is simply a UID: a counter that is incremented each time we
> > create a new token. The token is effectively a permit to send a
> > request, so a request will include a token. If the receiving node
> > doesn't recognize the token, it sends back a specialized message
> > indicating no such token. Otherwise it is required to accept the
> > request unless there is a serious problem (such as a looped request).
> > It then removes the token in question from the sending peer's
> > allocation. Using UIDs for tokens prevents any race conditions that
> > might otherwise occur.
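
In code this is little more than a counter and a set, e.g. (names
invented):

import java.util.HashSet;
import java.util.Set;

// Tokens as UIDs: a token is just a number from a counter; granting
// is remembering it, accepting a request spends it. Names invented.
class TokenGrantTable {
    private long nextUID = 0;
    private final Set<Long> outstanding = new HashSet<Long>();

    /** Create a new token to send to a peer. */
    synchronized long grant() {
        long uid = nextUID++;
        outstanding.add(uid);
        return uid;
    }

    /** A request arrived carrying 'uid'. False means reply with the
     *  "no such token" message; true means we must accept (barring
     *  e.g. a looped request) and the token is now spent. */
    synchronized boolean redeem(long uid) {
        return outstanding.remove(uid);
    }
}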
> > 
> > When nodes are first added, they are given some tokens. Tit-for-tat can
> > be implemented later on by scaling the maximum size of the token bucket
> > and biasing the decision on whom to give tokens to. When a request
> > completes, a new token is created and allocated to a node. There are two
> > obvious strategies for adding tokens:
> > - Reward requestors by adding to the least full bucket.
> > - Reward non-requestors by adding to the most full non-full bucket.
> > I prefer the latter option (it does impose some limits though; there
> > need to be enough tokens initially that a node not using its tokens
> > doesn't prevent everyone from requesting). Obviously we would use a
> > random tiebreaker in the case that two nodes have the same number of
> > tokens.
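
The latter strategy might look like this (a sketch; per-peer bucket
sizes and the rest are assumptions):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// When a request completes, give the new token to the most full
// bucket that is not yet full, random tiebreaker. Names invented.
class TokenAllocator {
    static final Random rand = new Random();

    static void allocate(int[] bucket, int[] maxSize) {
        int best = -1;
        List<Integer> ties = new ArrayList<Integer>();
        for (int i = 0; i < bucket.length; i++) {
            if (bucket[i] >= maxSize[i]) continue; // full: skip
            if (best == -1 || bucket[i] > bucket[best]) {
                best = i;
                ties.clear();
                ties.add(i);
            } else if (bucket[i] == bucket[best]) {
                ties.add(i);
            }
        }
        if (best == -1) return; // every bucket is full
        bucket[ties.get(rand.nextInt(ties.size()))]++;
    }
}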
> > 
> > What else remains? How to integrate this with routing is the
> > remaining question. All we have to do here is keep requests in some sort
> > of pending state, and when we get a token from a node, send the request
> > most specialized for that node. This means we will have some misrouting,
> > but not much, and nodes which send us few tokens will get only the
> > requests closest to their specializations. We can perhaps improve on
> > this by "simulated queueing", but I don't see why the basic mechanism
> > shouldn't work, albeit at a slight latency cost.
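
A sketch of that token-arrival hook (circular keyspace distance; the
names and the location field are my assumptions):

import java.util.List;

// Requests wait in a pending queue; when a token arrives from a peer,
// send it the pending request whose key is closest to that peer's
// location, i.e. the one most specialized for it. Names invented.
class PendingQueue {
    final List<Request> pending;

    PendingQueue(List<Request> pending) { this.pending = pending; }

    /** Called when the peer at 'peerLocation' grants us a token.
     *  Returns the request to forward to it, or null if none. */
    Request onToken(double peerLocation) {
        Request best = null;
        for (Request r : pending) {
            if (best == null
                    || distance(r.key, peerLocation)
                       < distance(best.key, peerLocation))
                best = r;
        }
        if (best != null) pending.remove(best);
        return best;
    }

    static double distance(double key, double loc) {
        double d = Math.abs(key - loc);
        return Math.min(d, 1.0 - d); // circular [0,1) keyspace
    }
}

class Request {
    double key; // routing key location in [0,1)
}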
> > 
> > I'm rather inclined to implement this without waiting for simulations.
> > Any objections?