Repeating the bottom line at the top:

NGR tries to make predictions of routing time.  For it to
succeed, routing time needs to be predictable.  So server
nodes need to try to be consistent:  if they succeeded in
the past for a particular client, let them try to succeed
again.  If they failed in the past, let them fail again.

On Fri, 2003-11-21 at 04:17, Ian Clarke wrote:
> Edward J. Huff wrote:
> > On Wed, 2003-11-19 at 17:21, Ian Clarke wrote:
> >>Toad wrote:
> >>>Is there any reason not to keep the backoff data when a node is dropped
> >>>from the routing table?
> >>Provided we delete it sometime, probably not.
> > It seems to me that there is some reasonable maximum backoff period.
> ...snip suggestion...
> 
> We need to be wary of feeping creaturism in the load-balancing code; if 
> it is going to get too complicated we may as well just implement one of 
> the more sophisticated but non-alchemical proposals.
> 

Well, I was just trying to establish the existence of a max backoff.
You might not need to actually calculate it.

However, looking at my payload ratio (10% or less), that number might
actually be quite large.  But it is still directly proportional to
N, the number of nodes making requests on me.  The problem is that
N can get too big.  Exponential backoff is a way of keeping N
smaller, but we can't let N go to zero.
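
To make that concrete, here is roughly what I have in mind for a
capped exponential backoff.  This is only a sketch; the class name,
method names and constants are made up, not actual Freenet code:

    // Sketch only, not actual Freenet code: exponential backoff with a hard
    // cap, so a node that keeps rejecting us is tried less and less often,
    // but never drops out of consideration entirely (N stays above zero).
    // The constants are illustrative guesses.
    public class BackoffState {
        private static final long INITIAL_BACKOFF_MS = 1000;          // assumed starting interval
        private static final long MAX_BACKOFF_MS = 30 * 60 * 1000L;   // assumed cap: 30 minutes

        private long currentBackoffMs = INITIAL_BACKOFF_MS;
        private long backedOffUntil = 0;

        // The node QueryRejected us for load reasons: double the interval, up to the cap.
        public synchronized void onQueryRejected(long now) {
            currentBackoffMs = Math.min(currentBackoffMs * 2, MAX_BACKOFF_MS);
            backedOffUntil = now + currentBackoffMs;
        }

        // A request succeeded: reset, so the node is fully routable again.
        public synchronized void onSuccess() {
            currentBackoffMs = INITIAL_BACKOFF_MS;
            backedOffUntil = 0;
        }

        public synchronized boolean isBackedOff(long now) {
            return now < backedOffUntil;
        }
    }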

So I think I am now a fan of exponential backoff.  On the original
question, "server" nodes which have repeatedly sent QueryRejected
(for load-balancing reasons, not for other reasons) need to stay
backed off, even if they get evicted from the RT.  They have in
effect been randomly chosen for exclusion from consideration for
routing.
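
Concretely, I picture the backoff data living in its own map keyed
by node identity, completely separate from the RT, so that evicting
a node does not erase its backoff history.  Again, just a sketch
with made-up names:

    // Sketch only: backoff state kept separately from the routing table,
    // keyed by node identity, so dropping a node from the RT does not
    // forget that it is backed off.  Entries are pruned only once the
    // backoff has expired ("provided we delete it sometime").
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class BackoffTracker {
        private final Map<String, BackoffState> backoffByNodeId = new ConcurrentHashMap<>();

        public BackoffState forNode(String nodeIdentity) {
            return backoffByNodeId.computeIfAbsent(nodeIdentity, id -> new BackoffState());
        }

        // Deliberately NOT called when a node is dropped from the RT;
        // only expired entries are ever discarded.
        public void pruneExpired(long now) {
            backoffByNodeId.values().removeIf(state -> !state.isBackedOff(now));
        }
    }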

The question now is "How does a server ensure that not all client
nodes back off on it?"  It should favor clients it has successfully
served before.  If the server keeps an estimate of each client's
estimator of the server's performance, it should especially favor
queries which match the specialization already present in that
estimator.
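
Something along these lines is what I mean.  Purely a sketch: the
per-client record, the scoring, and the 0..1 key location are my own
illustration, not anything currently in the code:

    // Sketch: under overload, prefer to keep serving clients we have served
    // successfully before, and prefer queries that fall near the keyspace
    // region those successes were in.  Fields, weights and the 0..1 key
    // location are illustrative assumptions only.
    public class ClientFavoring {

        public static class ClientRecord {
            long successes;
            long failures;
            double specializationCenter = 0.5;  // running mean of key locations served successfully

            double successRate() {
                long total = successes + failures;
                return total == 0 ? 0.0 : (double) successes / total;
            }
        }

        // Higher score = more worth serving when we have to reject somebody.
        // Weighting success rate and key distance equally is arbitrary.
        public static double serveScore(ClientRecord client, double requestedKeyLocation) {
            double distance = Math.abs(requestedKeyLocation - client.specializationCenter);
            return client.successRate() - distance;
        }

        // Accept under load only if this client scores above the current threshold.
        public static boolean shouldAcceptUnderLoad(ClientRecord client,
                                                    double requestedKeyLocation,
                                                    double overloadThreshold) {
            return serveScore(client, requestedKeyLocation) > overloadThreshold;
        }
    }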

NGR tries to make predictions of routing time.  For it to
succeed, routing time needs to be predictable.  So server
nodes need to try to be consistent:  if they succeeded in
the past for a particular client, let them try to succeed
again.  If they failed in the past, let them fail again.

-- Ed Huff
