On Tue, Apr 25, 2006 at 01:12:00AM +0300, Jusa Saari wrote:
> Is this actually load balancing ? It sounds more like load limiting.

Load limiting is FAR more important than load balancing, thank you for
correcting my errant terminology. However, there is *some* need for load
balancing, to the extent that *really* slow nodes should not pull the
whole network down.
> 
> Is there ever a situation where a message is forwarded to any but the
> optimal (from purely routing perspective, without any account for load)
> node ?

There shouldn't be except for the exclusion via backoff of REALLY slow
nodes. Misrouting to avoid load is _very_ bad because it generates more
load!
> 
> I'm asking because I just came to think about a really nasty interaction
> between load balancing and routing - basically, if a node is "better"
> somehow than other nodes, it gets more requests than them, and this leads
> to load balancing trying to push traffic away from it. Basically,
> routing and load balancing are going to undo each other in a reasonably
> loaded network - in a lightly loaded one, routing will dominate, in a
> heavily loaded one, load balancing will.

Hmm. We don't prefer "better" nodes, we punish seriously inferior nodes.
A "better" node in the above sense would have more capacity; the only
reason we would divert more requests to a node would be if it has a big
part of the keyspace because of a routing glitch.
> 
> So, dropping requests that cannot be delivered to a good node right now is
> a good thing; forwarding them to some other node is a bad thing and will
> end up making the network as hopelessly overloaded (since most messages
> will end up DNFing after traversing HTL nodes) and consequently as badly
> working as the previous attempts.

Yes. Misrouting is thoroughly, totally evil! Routing ought to balance
load already; there are two challenges:
- Identifying and avoiding nodes so slow that using them will slow down
  the whole network. Or ideally giving them a proportionately tiny part
  of the keyspace.
- Propagating load back to the source so that it is quenched at source,
  and so that flooding is not a viable DoS.
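The first challenge amounts to per-node backoff. Roughly, in code (the class
name, initial interval, and doubling policy are illustrative assumptions, not
the actual Freenet implementation):

```python
import time

class PeerBackoff:
    """Temporarily exclude a peer after consecutive failures,
    doubling the exclusion interval each time (capped)."""

    def __init__(self, initial=1.0, maximum=3600.0):
        self.initial = initial   # seconds of backoff after one failure
        self.maximum = maximum   # cap on the backoff interval
        self.interval = initial
        self.until = 0.0         # timestamp until which the peer is excluded

    def on_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.until = now + self.interval
        self.interval = min(self.interval * 2.0, self.maximum)

    def on_success(self):
        self.interval = self.initial

    def is_backed_off(self, now=None):
        now = time.monotonic() if now is None else now
        return now < self.until
```

A node that keeps failing ends up excluded for longer and longer stretches,
which approximates giving it a proportionately tiny part of the keyspace.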
> 
> On Fri, 21 Apr 2006 21:59:59 +0100, Matthew Toseland wrote:
> 
> > Proposal:
> > 
> > We keep a separate AIMD throttle for each peer. When we get a local
> > RejectedOverload, or a timeout, from a peer, we reduce the rate of sending
> > requests to that peer; when we don't, we increase it.
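A per-peer AIMD throttle of this kind might look like the following sketch
(parameter values and names are illustrative assumptions, not the proposal's
actual numbers):

```python
class AIMDThrottle:
    """Per-peer request-rate throttle: additive increase on success,
    multiplicative decrease on a local RejectedOverload or timeout."""

    def __init__(self, rate=10.0, increase=0.5, decrease=0.5,
                 floor=0.1, ceiling=1000.0):
        self.rate = rate          # allowed requests per second to this peer
        self.increase = increase  # additive step on success
        self.decrease = decrease  # multiplicative factor on overload
        self.floor = floor
        self.ceiling = ceiling

    def on_success(self):
        self.rate = min(self.rate + self.increase, self.ceiling)

    def on_rejected_overload(self):
        self.rate = max(self.rate * self.decrease, self.floor)
```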
> > 
> > We use fair queueing (or fair simulated queueing) to propagate load back
> > to its originator. By fair I mean:
> > - We compute the overall request quota from the above.
> > - If there are n peers, and q quota, then each peer has a guaranteed q/n
> >   requests. These can be sent regardless of what the other peers do.
> > - Some nodes will send fewer requests than their guaranteed share of the
> >   quota, so after the basic quota shares have been allocated, some global
> >   quota will remain. If all remaining requests above the basic quota fit
> >   into this, we let all requests through.
> > - If not, we allocate the requests to the node which is the least above
> >   its quota, followed by the one with the next fewest extra requests...
> >   until we run out of global quota.
> > - Any request which is not forwarded is rejected (with the local flag
> >   ON).
> > - Thus, the node which is sending the most requests is most severely
> >   punished. The result of this is that load is propagated back to the
> >   source.
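The allocation rule above can be sketched as follows (the function name and
types are mine, not from the proposal):

```python
def fair_allocate(demands, quota):
    """Give each sender a guaranteed quota/n share, then hand the
    remaining quota to the least-over-share senders first.
    demands: {sender: requested}; returns {sender: accepted}."""
    share = quota / len(demands)
    accepted = {s: min(d, share) for s, d in demands.items()}
    leftover = quota - sum(accepted.values())
    # Grant leftover quota to senders in order of smallest excess first.
    for excess, s in sorted((d - share, s) for s, d in demands.items()
                            if d > share):
        grant = min(excess, leftover)
        accepted[s] += grant
        leftover -= grant
    return accepted
```

For node D in the example below, with C requesting 30 and G requesting 5
against a quota of 20 (made-up numbers), all of G's requests pass and C
absorbs the rejections.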
> > 
> >    F     G
> >    |     |
> > A--B--C--D--E
> > 
> > A is spamming requests. This causes E to get overloaded (on CPU).
> > 
> > This causes D to reduce its send quota to E. Most of its requests for E
> > are coming from C, so it allows through all (most?) of G's requests, and
> > rejects many of C's requests.
> > 
> > C reduces its request send quota to D in response to the rejections. All
> > its requests are coming from B, so it passes on the overload indication by
> > rejecting many of B's requests.
> > 
> > B likewise sees many RejectedOverloads from C, so it slows down its send
> > rate to C. Since most of its requests are coming from A, it rejects many
> > of A's requests (but either none or a very small fraction of F's).
> > 
> > Thus, load is propagated back to A, with minimal effect on F and G.
> > 
> > 
> > I believe we can implement the above with either actual queueing or
> > simulated queueing. The latter would obviously avoid a latency cost.
> > 
> > 
> > Nodes which always reject will not bring down the network, because they
> > only affect their part of the overall quota, and their RejectedOverloads
> > are not propagated (or, if they are, they are propagated with local=false,
> > which, given the abolition of client-end request throttling, amounts to
> > the same thing). Timeouts do propagate (necessarily, because of our HTL
> > system), which could be a problem; we can either back off on a timeout, or
> > make the RTT include timeouts (even though it doesn't include local
> > RejectedOverloads).

-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
