Proposal:

We keep a separate AIMD throttle for each peer. When we get a local
RejectedOverload or a timeout from a peer, we reduce the rate at which
we send requests to that peer; when we don't, we increase it.
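
Something like this, roughly (all class, method and constant names are
mine, purely to illustrate the idea; this is not actual code):

// Hypothetical per-peer AIMD request throttle. Names and constants
// are illustrative, not taken from the real codebase.
public class PeerThrottle {
    private double requestsPerSecond = 1.0;     // current send rate
    private static final double INCREASE = 0.1; // additive increase
    private static final double DECREASE = 0.5; // multiplicative decrease
    private static final double MIN_RATE = 0.01;

    // Called when the peer sends a local RejectedOverload or times out.
    public synchronized void onCongestion() {
        requestsPerSecond = Math.max(MIN_RATE, requestsPerSecond * DECREASE);
    }

    // Called when a request completes with no congestion signal.
    public synchronized void onSuccess() {
        requestsPerSecond += INCREASE;
    }

    public synchronized double getRate() {
        return requestsPerSecond;
    }
}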

We use fair queueing (or fair simulated queueing) to propagate load back
to its originator. By fair I mean:
- We compute the overall request quota from the above.
- If there are n peers and the total quota is q, then each peer is
  guaranteed q/n requests. These can be sent regardless of what the
  other peers do.
- Some peers will send fewer requests than their guaranteed share of
  the quota, so after the basic shares have been allocated, some
  global quota will remain. If all the requests above the basic
  shares fit into this remainder, we let them all through.
- If not, we allocate the remaining quota to the peer which is least
  above its share, then to the one with the next-fewest excess
  requests, and so on until the global quota runs out (see the sketch
  after this list).
- Any request which is not forwarded is rejected (with the local flag
  ON).
- Thus, the peer which is sending the most requests is most severely
  punished. The result is that load is propagated back to the source.
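
As a rough sketch of this allocation (all names are mine, and the
integer arithmetic is simplified, e.g. the remainder of q/n is
ignored):

import java.util.Arrays;

// Illustrative allocation of the global request quota among peers.
public class QuotaAllocator {
    // demand[i]: requests peer i wants forwarded this round;
    // quota: the global total from the per-peer AIMD throttles.
    // Returns granted[i]; the rest are rejected with local=true.
    public static int[] allocate(int[] demand, int quota) {
        int n = demand.length;
        final int share = quota / n; // guaranteed per-peer share
        int[] granted = new int[n];
        int leftover = 0;

        // Pass 1: each peer gets min(demand, share); the unused part
        // of each guaranteed share pools up as leftover global quota.
        for (int i = 0; i < n; i++) {
            granted[i] = Math.min(demand[i], share);
            leftover += share - granted[i];
        }

        // Pass 2: hand the pooled quota to the peers least above
        // their share first, so the heaviest sender is rejected most.
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) ->
                Integer.compare(demand[a] - share, demand[b] - share));
        for (int i : order) {
            int grant = Math.min(demand[i] - granted[i], leftover);
            granted[i] += grant;
            leftover -= grant;
        }
        return granted;
    }
}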

   F     G
   |     |
A--B--C--D--E

A is spamming requests. This causes E to get overloaded (on CPU).

This causes D to reduce its send quota to E. Most of its requests for
E are coming from C, so it lets through all (or at least most) of G's
requests and rejects many of C's requests.

C reduces its request send quota to D in response to the rejections. All
its requests are coming from B, so it passes on the overload indication
by rejecting many of B's requests.

B likewise sees many RejectedOverloads from C, so it slows down its
send rate to C. Since most of its requests are coming from A, it
rejects many of A's requests (but few or none of F's).

Thus, load is propagated back to A, with minimal effect on F and G.


I believe we can implement the above with either actual queueing or
simulated queueing; the latter would obviously avoid the latency cost.
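
By simulated queueing I mean something like the following: we never
actually hold a request; we keep per-peer counters and decide
accept/reject immediately, approximating what a real fair queue would
have done. A sketch under my own assumptions (note the shared pool
here is served first-come-first-served, a simplification of the
least-over-quota ordering above):

// Hypothetical "simulated queueing": admit or reject each request
// immediately from per-peer token counters plus a shared pool, so no
// request ever waits in a real queue.
public class SimulatedFairQueue {
    private final double[] tokens; // per-peer guaranteed tokens
    private double shared;         // quota pooled from unused shares

    public SimulatedFairQueue(int peers) {
        this.tokens = new double[peers];
    }

    // Called once per accounting period with the current global quota
    // (the sum of the per-peer AIMD rates times the period length).
    public synchronized void refill(double quota) {
        double share = quota / tokens.length;
        shared = 0;
        for (int i = 0; i < tokens.length; i++) {
            shared += tokens[i]; // unused guaranteed share pools up
            tokens[i] = share;
        }
    }

    // Decide one request from peer i: true = forward, false = reject
    // with the local flag ON. Nothing is ever delayed.
    public synchronized boolean admit(int i) {
        if (tokens[i] >= 1) { tokens[i]--; return true; }
        if (shared >= 1)    { shared--;    return true; }
        return false;
    }
}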


Nodes which always reject will not bring down the network: they only
affect their own part of the overall quota, and their
RejectedOverloads are not propagated (or if they are, they are
propagated with local=false, which, given the abolition of client-end
request throttling, amounts to the same thing). Timeouts do propagate
(necessarily, because of our HTL system), which could be a problem; we
can either back off on a timeout, or make the RTT estimate include
timeouts (even though it doesn't include local RejectedOverloads).
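
If we take the second option, it could look something like this (an
EWMA estimator in the style of TCP's smoothed RTT; constants and names
are mine, purely illustrative):

// Sketch of an RTT estimator that folds timeouts in as worst-case
// samples, so timeouts slow the AIMD rate without a separate backoff
// mechanism. Constants and names are illustrative assumptions.
public class RttEstimator {
    private double rttMillis = 1000;           // smoothed estimate
    private static final double ALPHA = 0.125; // EWMA weight, as in TCP

    public synchronized void sample(double measuredMillis) {
        rttMillis = (1 - ALPHA) * rttMillis + ALPHA * measuredMillis;
    }

    // A timeout counts as a sample equal to the full timeout length;
    // local RejectedOverloads are deliberately NOT fed in here.
    public synchronized void onTimeout(double timeoutMillis) {
        sample(timeoutMillis);
    }

    public synchronized double get() {
        return rttMillis;
    }
}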
--
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
