On Wed, Feb 13, 2002 at 10:50:25AM -0500, Tavin Cole wrote:
> Oskar's idea to use QueryRejecteds for load regulation at the
> application level was the first step.  It introduced some negative
> feedback.  Now we need to apply that feedback to the routing logic.
> Right now all we do is hope that hawk will drop out of the routing table
> as we accumulate references to other nodes.  This is just too slow.  The
> options I can see are:
> 
> 1. factor QueryRejecteds into the CP (ugly, mixes layers)
> 2. introduce an application-layer probabilistic factor like CP
>    (might as well just do #1)
> 3. only send N requests at a time, where N is some small integer,
>    waiting for the Accepteds before sending more.  bail out on all
>    queued requests if we get a QueryRejected instead of Accepted.
>    (arbitrary and bad for performance)
> 4. reintroduce ref deletion.  when we receive a QueryRejected, delete
>    the ref with a probability of 1 - 1/(no. of refs to that node).
>    the probability is to prevent removing the last ref to that node.
> 
> I am favoring #4.  I think we should use it for timeouts as well as
> rejected requests.

Also, we need an approach that works well with the way excess nodes are
purged from the RT.  The current method is to drop the least recently
contacted nodes.  Without ref deletion, or some other way to stop
attempts to a 100%-rejecting node, such a node could stay in the RT
indefinitely.
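
To make #4 concrete, something like the sketch below would do it.  The
names (NodeEntry, refCount(), deleteOneRef()) are made up for
illustration, not the actual RT code; the point is just that the
deletion probability 1 - 1/n is 0 when only one ref remains, so the
last ref is never removed, while a node holding many refs loses them
quickly as it keeps rejecting:

    import java.util.Random;

    // Minimal sketch of option #4: probabilistic ref deletion on
    // QueryRejected or timeout.  NodeEntry, refCount(), and
    // deleteOneRef() are invented names, not the real routing table API.
    public class RefDeletionSketch {

        private static final Random rand = new Random();

        // Called when a request routed to this node comes back
        // QueryRejected or times out.  Deletes one ref with probability
        // 1 - 1/n, where n is the number of refs held for the node;
        // with n == 1 the probability is 0, so the last ref survives.
        public static void onRejectOrTimeout(NodeEntry node) {
            int n = node.refCount();
            if (n <= 1)
                return;                      // never delete the last ref
            double pDelete = 1.0 - 1.0 / n;  // n=2 -> 0.5, n=10 -> 0.9
            if (rand.nextDouble() < pDelete)
                node.deleteOneRef();
        }

        // Stand-in for whatever the routing table stores per node.
        public interface NodeEntry {
            int refCount();
            void deleteOneRef();
        }
    }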

The other possible method for purging nodes is by least number of refs
(although that risks deleting nodes that were only just added).
With this method as well, ref deletion would probably be needed for things
to work properly.  Otherwise a 100%-rejecting node that had accumulated
a large number of refs could persist for a long time.
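
For comparison, purging by fewest refs might look roughly like the
following.  The grace period for just-added nodes is only my guess at
how to handle that problem, and again every name here is invented:

    import java.util.List;

    // Rough sketch of purging the RT by fewest refs, skipping nodes
    // added within a grace period so fresh entries aren't dropped
    // before they can accumulate refs.  All names and the grace period
    // are assumptions for illustration, not the current implementation.
    public class PurgeByRefsSketch {

        static final long GRACE_PERIOD_MS = 30L * 60 * 1000;  // assumed: 30 min

        // Stand-in for a routing table record.
        public static class Entry {
            int refs;        // number of refs held for this node
            long addedTime;  // when the node entered the RT (millis)
        }

        // Remove entries, fewest refs first, until the table is at or
        // below maxNodes.  Entries newer than the grace period are
        // never chosen as victims.
        public static void purge(List<Entry> entries, int maxNodes) {
            while (entries.size() > maxNodes) {
                long now = System.currentTimeMillis();
                Entry victim = null;
                for (Entry e : entries) {
                    if (now - e.addedTime < GRACE_PERIOD_MS)
                        continue;            // too new to judge
                    if (victim == null || e.refs < victim.refs)
                        victim = e;
                }
                if (victim == null)
                    break;                   // everything is inside the grace period
                entries.remove(victim);
            }
        }
    }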

-tc
