Load management proposal:

When we receive a request, we stick it in a queue. The queue is limited in 
length, and limited in queue time (probably 500-1000ms). If a request is 
still on the queue at the end of the timeout, or if there are too many 
requests on the queue, we reject it.
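As a rough illustration of that queue, here is a minimal sketch; the class and field names are made up for this example, and the exact limits (length 100, timeout 750ms within the suggested 500-1000ms range) are placeholder assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the bounded, time-limited request queue.
// Not real Freenet classes; limits are placeholder assumptions.
public class RequestQueue {
    static final int MAX_LENGTH = 100;         // assumed queue length limit
    static final long MAX_QUEUE_TIME_MS = 750; // "probably 500-1000ms"

    static class PendingRequest {
        final long enqueuedAt;
        PendingRequest(long now) { enqueuedAt = now; }
    }

    private final Deque<PendingRequest> queue = new ArrayDeque<>();

    /** Returns false (i.e. reject the request) if the queue is full. */
    public synchronized boolean offer(PendingRequest req) {
        if (queue.size() >= MAX_LENGTH) return false; // too many queued
        queue.add(req);
        return true;
    }

    /** Reject requests that have waited past the timeout; returns how many. */
    public synchronized int expire(long now) {
        int rejected = 0;
        while (!queue.isEmpty()
               && now - queue.peek().enqueuedAt > MAX_QUEUE_TIME_MS) {
            queue.poll(); // the real node would send the rejection here
            rejected++;
        }
        return rejected;
    }

    public synchronized int size() { return queue.size(); }
}
```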

We have a thread which wakes up every so often, and decides whether we can 
send a request. This may be a little similar to RequestStarter, but it would 
handle all types of request. It will maintain a balance between SSKs and 
CHKs, that is to say, it will use the current criterion (we only start a 
request for an X if we also have enough bandwidth to start one of each other 
type). If we can run a request, it chooses a request to run, removes it from 
the relevant queue, and starts it.
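The starter thread's decision loop might look something like the following sketch. The class and method names are illustrative, not the real RequestStarter API, and the per-request bandwidth costs are placeholder assumptions; the point is the balance criterion and the wake-choose-start cycle:

```java
// Sketch of the starter thread's decision loop. Names and costs are
// illustrative assumptions, not the real RequestStarter API.
public class RequestStarterLoop {
    static final int SSK_COST = 1024;   // assumed bandwidth cost per SSK request
    static final int CHK_COST = 32768;  // assumed bandwidth cost per CHK request

    /**
     * The current criterion: we only start a request of one type if we also
     * have enough bandwidth headroom to start one of each other type.
     */
    public static boolean canStartRequest(int availableBandwidth) {
        return availableBandwidth >= SSK_COST + CHK_COST;
    }

    /** Called each time the starter thread wakes up. */
    public void wakeUp() {
        while (canStartRequest(availableBandwidth())) {
            Object req = chooseRequest(); // e.g. closest to our location
            if (req == null) break;       // nothing queued
            startRequest(req);            // removes it from its queue and runs it
        }
    }

    // Stubs standing in for the real node's state:
    int availableBandwidth() { return 0; }
    Object chooseRequest() { return null; }
    void startRequest(Object req) {}
}
```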

We have two options afaics:
1. We start the remote request closest to our location. We keep the separate 
request starter for local requests.
2. We choose a location through some probability distribution, and start the 
request closest to that point. Or we rank the requests in order of distance 
from our location, and choose one according to some distribution. We could 
use this to start local requests too. We'd need to maintain a short queue for 
them, chosen from the wider queue structure; this would be treated just like 
a queue for any other peer.
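For either option, the core operation is circular distance on the [0.0, 1.0) keyspace and a closest-match pick over the queued keys. A minimal sketch (names are made up; option 2 would replace `ourLocation` with a sampled location, or weight the ranked list by some distribution):

```java
import java.util.Comparator;
import java.util.List;

// Illustrative location-based selection; not real Freenet code.
public class LocationChooser {
    /** Circular distance on the [0.0, 1.0) keyspace. */
    public static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    /**
     * Option 1: pick the queued request key closest to our location.
     * Option 2 would pass a location drawn from a distribution instead.
     */
    public static double closest(List<Double> keyLocations, double ourLocation) {
        return keyLocations.stream()
            .min(Comparator.comparingDouble(k -> distance(k, ourLocation)))
            .orElseThrow();
    }
}
```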

The second phase would be to extend this to something resembling token 
passing, by telling nodes how many requests they can send, whenever we send 
them an Accepted or RejectedOverload, when the maximum queue length changes, 
and on connect. Thus nodes wouldn't send us requests when we don't want them. 
This should improve bandwidth efficiency, especially for nodes with low 
bandwidth limits.
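One plausible way to compute the per-peer allowance piggybacked on Accepted/RejectedOverload/connect is to split our acceptable queue capacity across connected peers. This division policy is purely an assumption for illustration:

```java
// Hedged sketch of a per-peer request allowance; the even-split policy
// and names are assumptions, not a proposal detail from the text.
public class TokenAllowance {
    /** Split the queue capacity we will accept evenly across peers. */
    public static int tokensForPeer(int maxQueueLength, int connectedPeers) {
        if (connectedPeers <= 0) return 0;
        return Math.max(1, maxQueueLength / connectedPeers);
    }
}
```

Whenever the maximum queue length changes, we would resend the updated allowance to each peer.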

We could then move to full token passing by not choosing the requests globally 
based on *our* location, but whenever we have the ability to send a request 
to a node, and we are willing to send a request (w.r.t. bandwidth limiting 
etc), matching a request to it. The external protocol would remain the same.

Advantages:

- Not as drastic as full token passing. Externally, the only difference is 
that it may take a second or so before a request is accepted (or rejected).
- For option 2, it makes it harder to identify local requests through a timing 
attack. And afaics impossible once we reach full token passing.
- Reinforces specialisation, killing off paths that are too far away from 
where they should be.
- Slower nodes will specialise more sharply.
- Once we have phase 2, significantly better bandwidth usage.

Disadvantages:

- Slight latency cost.
- Need to simulate it because of location-related acceptance.
- No direct load propagation until full token passing, so we need the 
requestor-side AIMDs until then.