On Monday 03 December 2007 17:44, Michael Rogers wrote:
> Matthew Toseland wrote:
> > I'm not sure that that is the main problem we have to deal with right now.
> > My suspicion is that pre-emptive rejection is fine, but we're not being
> > sent enough requests in the first place.
> 
> Cool, I see what you mean now - I'd been thinking in terms of replacing
> both backoff and pre-emptive rejection but I guess it makes sense to
> replace backoff first and see if that solves the problem.

Well, the initial idea was for pre-emptive rejection to be *mostly* replaced 
by a simple limit on the number of running requests (including those queued 
awaiting a node to be routed to). I would probably add this, at least to 
begin with. However, some other forms of pre-emptive rejection also lend 
themselves to easily determining that we can accept a few more requests now; 
e.g. bandwidth liability can do this calculation trivially.
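To make the "trivial calculation" concrete, here is a minimal Java sketch of how a bandwidth liability limit could yield a request count rather than a yes/no answer. The class and method names are illustrative, not from the Freenet codebase, and the "spare bytes over the liability window" input is assumed to be available from the existing liability accounting:

```java
// Hypothetical sketch: turning the bandwidth liability check into a count of
// how many more requests we can accept. Names are illustrative only.
public class LiabilityEstimate {

    /**
     * @param spareBytes      bytes we can still commit to over the liability window
     * @param bytesPerRequest expected transfer size of one request
     * @return how many additional requests we could accept right now
     */
    static int acceptableRequests(long spareBytes, long bytesPerRequest) {
        if (bytesPerRequest <= 0)
            throw new IllegalArgumentException("bytesPerRequest must be positive");
        // Simple integer division: each accepted request commits us to
        // roughly bytesPerRequest of liability.
        return (int) Math.max(0, spareBytes / bytesPerRequest);
    }
}
```

For example, with 320 KiB of spare liability and 32 KiB expected per request, we could solicit ten more requests.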

As you can see there are some details to work out.

Let's consider the current pre-emptive rejection code. We currently reject if:
- We are over the thread count.
We can guesstimate how many threads will be used by a request, and therefore 
determine how many requests we can accept.
- The ping time is too high.
Not obvious how we could adapt this, but we can keep it.
- The bandwidth-limited-packets average delay is too high.
Not obvious how we could adapt this, but we can keep it. This is largely 
obsoleted by bandwidth liability limiting on most nodes anyway. We could even 
get rid of it.
- Input or output bandwidth liability limits would be exceeded.
We only accept any request if we have enough spare bandwidth for one request 
of each type. This is a measure against accidentally favouring specific 
request types. We can either multiply this requirement (to get a low figure), 
add an average value based on the typical request (to get a middling figure), 
or assume that past the first they are all SSK requests (to get a high 
figure). One of these options should be workable; if not, just go one at a 
time.
- The high level token buckets don't have enough space for the expected bytes 
transferred.
Trivial to adapt, subject to the same worries about request types as above. 
Again, there may be an argument for getting rid of this.
- There isn't enough memory left.
Difficult to estimate a number, but not necessary most of the time. Hard 
resource limits are one obvious reason why we might reject a request even in 
a "traditional" token passing scheme.
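The checks above split into two kinds: those that can yield a count (threads, bandwidth liability, token buckets) and hard checks that can only say no (ping time, memory). A minimal Java sketch of combining them, with all names illustrative and not from the Freenet codebase:

```java
// Hypothetical sketch: combine per-limit estimates into a single figure for
// how many more requests we can accept. Counting limits contribute via min();
// hard checks (ping time, memory) zero the result outright.
public class CapacityEstimate {

    static int acceptable(int byThreads, int byLiability, int byTokenBuckets,
                          boolean pingOk, boolean memoryOk) {
        // Hard resource limits still reject outright, as in the current code.
        if (!pingOk || !memoryOk) return 0;
        // Otherwise the tightest counting limit wins.
        int count = Math.min(byThreads, Math.min(byLiability, byTokenBuckets));
        return Math.max(0, count);
    }
}
```

The result is what we would advertise to peers as available tokens; a zero simply means we solicit nothing until conditions improve.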

Note that while being able to estimate a number greater than one is useful, 
it's not essential in the new scheme: we expect to send time-limited request 
solicitations to many of our peers anyway.

What happens if we have plenty of capacity? Well, we send all our peers 
tokens with a largish count, and when a peer makes a request, we send 
another RequestARequest in response. This costs nothing (after packet 
padding), as we will have to send an Accepted or Reject* in response anyway. 
But what if there isn't much demand? What if the time-limited tokens expire 
before the node wants to send a request? Well, we don't specify any 
particular timeout; we simply reject any requests a node sends us if we 
don't have the capacity to take them. When a node gets a RequestARequest, it 
updates its token count. If we reject a request due to overload, the token 
count is reset to zero, and the node won't send us any more requests until 
we explicitly solicit some with another RequestARequest.
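The per-peer bookkeeping described above can be sketched in a few lines of Java. This is a toy model of the proposed behaviour, not an implementation; the class and method names are made up for illustration:

```java
// Hypothetical sketch of a peer's view of its tokens under the proposed
// scheme: a RequestARequest sets the count, an overload rejection resets it
// to zero, and the peer only sends requests while it holds tokens.
public class PeerTokens {
    private int tokens = 0;

    // A RequestARequest from the remote node updates our token count.
    void onRequestARequest(int count) { tokens = count; }

    // An overload rejection means we stop sending until solicited again.
    void onRejectedOverload() { tokens = 0; }

    // Spend one token per request sent; refuse to send with none left.
    boolean maySendRequest() {
        if (tokens <= 0) return false;
        tokens--;
        return true;
    }
}
```

Note that expiry needs no explicit timer on the sending side: stale tokens are simply invalidated by the first rejection.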

Another option would be to issue tokens in terms of bytes rather than 
requests, but this would likely cause a preference for SSKs, so is probably a 
bad idea.

Does this make sense?


However, there is no urgency, as the network appears to be behaving at the 
moment. It would be worth looking into after opennet is fully working 
(opennet will be fully implemented soon).
> 
> Cheers,
> Michael