Martin Stone Davis wrote:

Toad wrote:

Currently the situation, even with the recently integrated probabilistic
rejection, is as follows:
We start off with no load.
We accept some queries.
Eventually we use up our outbound bandwidth, and due to either
messageSendTimeRequest or the output bandwidth limit, we reject queries
until our currently transferring requests have been fulfilled.
With our current running average code, at this point the node's standing
in requesters' pSearchFailed estimates goes through the floor, and it
won't recover because the node won't be routed to.
Possible solutions proposed:
1. Try the nodes again after some fixed, perhaps increasing, backoff
once we are into QR mode. One way to do this is to abuse the
pSearchFailed estimator, as edt has suggested; another would be to
randomly fork requests occasionally so that each node in the RT is
visited at least every N seconds as long as the node has some load.
The pSearchFailed estimator will recover quite fast once the node is
retried and is no longer query-rejecting.

Actually, perhaps we could easily adjust this solution to meet my goal of reducing the number of queries made: any time the requestee notices that the requester is not backing off, it punishes the requester by QRing all of its requests for a limited period of time.
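
For concreteness, here is a minimal sketch of the increasing-backoff half of
option 1. None of the class or method names below come from the actual
codebase; they are invented for illustration, and the constants are arbitrary.
The idea is just: on a QueryRejected, double a per-node backoff interval up to
a cap and don't route to that node until it expires; on an accepted query,
reset.

// Hypothetical sketch of per-node increasing backoff (option 1).
// All names and constants here are invented for illustration.
import java.util.HashMap;
import java.util.Map;

class BackoffTracker {
    private static final long INITIAL_BACKOFF_MS = 1_000;
    private static final long MAX_BACKOFF_MS = 5 * 60_000;

    private static final class State {
        long backoffMs = INITIAL_BACKOFF_MS;
        long retryAtMs = 0; // earliest time we may route to this node again
    }

    private final Map<String, State> states = new HashMap<>();

    /** Called when a node sends QueryRejected: back off, doubling up to a cap. */
    synchronized void onQueryRejected(String nodeId) {
        State s = states.computeIfAbsent(nodeId, k -> new State());
        s.retryAtMs = System.currentTimeMillis() + s.backoffMs;
        s.backoffMs = Math.min(s.backoffMs * 2, MAX_BACKOFF_MS);
    }

    /** Called when a node accepts a query: fresh data will flow again, so reset. */
    synchronized void onQueryAccepted(String nodeId) {
        states.remove(nodeId);
    }

    /** Routing asks this before considering the node at all. */
    synchronized boolean mayRouteTo(String nodeId) {
        State s = states.get(nodeId);
        return s == null || System.currentTimeMillis() >= s.retryAtMs;
    }
}

The requestee-side punishment could be the mirror image: remember roughly the
backoff you implied when you rejected, and QR everything from a requester that
comes back well before it has expired.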


2. Use a really long-term average.

This won't work: the prediction will still hardly ever match reality. The reality is that the node is usually either 100% QR or 100% QA. The goals are to get requesters to predict *better* and to reduce the number of queries to match capacity.
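
A toy illustration of why the length of the averaging window doesn't help
much (this is not the actual running-average code, just an invented demo):
feed a decaying average a node that alternates between stretches of 100% QR
and 100% QA. With a window much longer than the QR/QA cycle, the estimate
just hovers around the long-run duty cycle, so as a predictor of the next
query's fate it is wrong roughly half the time.

// Toy demonstration, not real code: a long-term exponentially decaying
// average fed by a node that alternates between all-reject and all-accept.
public class RunningAverageDemo {
    public static void main(String[] args) {
        double decay = 0.999;   // "really long-term" average
        double estimate = 0.5;  // estimated P(query rejected) for the node
        int wrong = 0, total = 0;

        // 20 cycles of 200 straight rejections followed by 200 acceptances.
        for (int cycle = 0; cycle < 20; cycle++) {
            for (int i = 0; i < 400; i++) {
                int rejected = (i < 200) ? 1 : 0;
                // Predict "reject" iff the estimate says it is more likely.
                int predicted = (estimate > 0.5) ? 1 : 0;
                if (predicted != rejected) wrong++;
                total++;
                estimate = decay * estimate + (1 - decay) * rejected;
            }
        }
        // The estimate hovers around 0.5 (the duty cycle), so the prediction
        // is wrong for roughly half the queries -- barely better than chance.
        System.out.printf("final estimate=%.2f, wrong=%d/%d%n",
                estimate, wrong, total);
    }
}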


3. Have the node somehow guess when it will next be available for
queries, and tell the requesting node, which then uses that as a backoff
time. Somebody has suggested essentially this too. You could perhaps
guesstimate it from the transfer rate... but sadly the transfer rate
will vary over time.

Any other suggestions? Any detail as to why/how a particular option would work?
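
On option 3, here is a minimal sketch of the "guesstimate it from the
transfer rate" part, with invented names and constants (not actual code):
divide the bytes still owed on currently transferring requests by a smoothed
recent output rate, and hand the result to the requester as a suggested
backoff inside the QueryRejected. As noted above, the rate varies, so the
requester should treat it as a hint rather than a promise.

// Hypothetical sketch for option 3; names and constants are invented.
// The node estimates when it expects to accept queries again and hands
// that to the requester as a suggested backoff.
public class AvailabilityEstimator {
    private double smoothedBytesPerSec = 10_000; // smoothed recent output rate
    private static final double ALPHA = 0.1;     // smoothing factor

    /** Update the smoothed transfer rate from a recent measurement. */
    public synchronized void reportTransferRate(double bytesPerSec) {
        smoothedBytesPerSec =
            ALPHA * bytesPerSec + (1 - ALPHA) * smoothedBytesPerSec;
    }

    /**
     * Estimate milliseconds until the backlog of currently transferring
     * requests drains, i.e. until we expect to stop query-rejecting.
     * This is only a guess: the transfer rate will vary over time.
     */
    public synchronized long estimateBackoffMillis(long bytesQueued) {
        if (smoothedBytesPerSec <= 0) return Long.MAX_VALUE;
        return (long) (1000.0 * bytesQueued / smoothedBytesPerSec);
    }
}

// A QueryRejected sent while overloaded could then carry
// estimateBackoffMillis(outstandingBytes) as a "retry after" hint,
// which the requester uses as its backoff time for this node.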


<snipped: my expansion of option 3>
-Martin



