On Fri, Apr 14, 2006 at 10:24:53AM +0100, Michael Rogers wrote:
> 
> Sorry to contradict what I said yesterday, but last night I tried to go
> through the argument from first principles and came to a different
> conclusion.
> 
> What are we trying to achieve with load balancing / congestion control?
> 
> 1. Reduce load on overloaded nodes
> 2. Don't redirect load down longer paths
> 3. When capacity is greater than traffic, give everyone as much as they
> ask for
> 4. When traffic is greater than capacity, give everyone a fair share

Above all: Propagate load back to the originator.

"A fair share" means that those who send few requests should have a
higher success rate than those who send many requests - even as a
proportion of the total they have sent. This is necessary and sufficient
for load to propagate back to the flooder.

> My proposal yesterday tried to solve all 4 points with one mechanism
> based on end-to-end feedback, but as Matthew pointed out, there's no
> point adjusting the rate of a path that's not going to be used again.
> Also I'm not sure the mechanism I proposed yesterday would ensure
> fairness - the rate of a link changes according to the ratio of successes
> to failures, and there's no reason to think that a node that produces
> more than its share of traffic will have a higher proportion of failures
> on anything except the local link. So I've come to the conclusion that
> fairness should be enforced on the link nearest the sender, and flow
> control should be enforced on the link nearest the (overloaded)
> receiver. Here's my new suggestion:
> 
> 1. AIMD flow control on each link. The rate is increased in response to
> local acks, and decreased in response to local packet loss, local
> timeouts and local overloads. The rate is not affected by end-to-end
> replies, failures, overloads or timeouts.

Agreed partially; the rate should be reduced in response to local
overloads *including those generated by queueing*!
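Point 1, with the amendment above, could be sketched roughly like this. This is a minimal illustration in Python; the class name and the constants (INCREASE_STEP, DECREASE_FACTOR, MIN_RATE) are made up for the example and are not actual Freenet code:

```python
class LinkRate:
    """Additive-increase / multiplicative-decrease rate for one link."""

    INCREASE_STEP = 1.0    # messages/sec added per local ack
    DECREASE_FACTOR = 0.5  # multiplier applied on local trouble
    MIN_RATE = 0.1         # floor so the link can probe its way back up

    def __init__(self, initial_rate=1.0):
        self.rate = initial_rate

    def on_local_ack(self):
        # Additive increase: slowly probe for spare capacity.
        self.rate += self.INCREASE_STEP

    def on_local_trouble(self):
        # Multiplicative decrease on local packet loss, local timeout,
        # or local overload -- including overloads generated by queueing.
        # End-to-end replies, failures and timeouts do NOT touch the rate.
        self.rate = max(self.MIN_RATE, self.rate * self.DECREASE_FACTOR)
```

Note that only local events drive the rate, matching the proposal: end-to-end outcomes are deliberately ignored.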
> 
> Rationale: this mechanism is meant to deal with two problems: packet
> loss due to congestion in the underlying network, which should result in
> a TCP-friendly slowdown, and bandwidth/CPU overload at the neighbour,
> which should be relieved by finding the maximum rate at which the
> neighbour can process messages. Instead of pushing load back to the
> sender, fair queueing limits the bandwidth available to the sender in
> the first place.

Strongly disagree. We can push load back to the sender simply by running
AIMD on each node with an appropriate queueing algorithm.
> 
> 2. A token bucket for each incoming link, filled at equal rates, with
> the total rate determined by the user-specified outgoing bandwidth limit.

What's this for?
> 
> 3. A token bucket for each outgoing link, filled at a rate determined by
> the AIMD flow control on the link (see above).

And this?
> 
> 4. Forward a packet if there are enough tokens in the incoming and
> outgoing buckets; otherwise send an overload message.

Hmmm. I'm not sure this is necessary... it would be nice to ensure that
we don't accept requests that we can't possibly process based on
bandwidth alone, but acceptance depends on other factors too, e.g. the
probability of success, and any good overall limiter would easily handle
what this check would.
> 
> Rationale: the incoming buckets ensure long-term fairness between
> neighbours while allowing short-term bursts. To avoid wasting bandwidth
> when a neighbour uses less than its share in the long term, when a
> bucket overflows the tokens are redistributed to the emptiest bucket.
> This allows unfairness when there's excess capacity while enforcing
> fairness when capacity is scarce.
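The incoming-bucket scheme with overflow redistribution could look something like this. A sketch only, assuming equal fill shares per neighbour and a per-bucket capacity; class and method names are invented for illustration:

```python
class BucketSet:
    """Per-neighbour token buckets filled at equal rates; overflow
    from a full bucket is redistributed to the emptiest buckets."""

    def __init__(self, neighbours, capacity):
        self.capacity = capacity
        self.tokens = {n: 0.0 for n in neighbours}

    def fill(self, total_tokens):
        # Every neighbour gets an equal share of the new tokens.
        share = total_tokens / len(self.tokens)
        spare = 0.0
        for n in self.tokens:
            self.tokens[n] += share
            if self.tokens[n] > self.capacity:
                spare += self.tokens[n] - self.capacity
                self.tokens[n] = self.capacity
        # Redistribute overflow to the emptiest buckets, so a quiet
        # neighbour's unused share isn't wasted when capacity is spare.
        while spare > 0:
            emptiest = min(self.tokens, key=self.tokens.get)
            room = self.capacity - self.tokens[emptiest]
            if room <= 0:
                break  # every bucket is full; drop the excess
            grant = min(room, spare)
            self.tokens[emptiest] += grant
            spare -= grant

    def try_send(self, neighbour, cost=1.0):
        # Spend tokens if available; otherwise the caller signals overload.
        if self.tokens[neighbour] >= cost:
            self.tokens[neighbour] -= cost
            return True
        return False
```

This gives the stated behaviour: unfairness is tolerated while there is slack, but once every bucket is full (capacity is scarce) each neighbour is held to its equal share.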
> 
> Unlike my suggestion yesterday, this mechanism requires two buckets per
> neighbour rather than one bucket per pair of neighbours.
> 
> Open problems:
> 
> Should each local client have its own incoming bucket, or will that just
> encourage clients to open multiple connections? Should all clients share
> a single bucket? Should local clients get a fixed share of the bandwidth
> (say 10%), independent of the number of neighbours?

Good questions. Can be addressed later.
> 
> Should a well behaved sender adjust its rate in response to a non-local
> overload? Or should it keep sending at the same rate, relying on fair
> queueing and the fact that its next request is unlikely to reach the
> same overloaded node? What about retransmissions?

Fair queueing, in the sense of "those who send many requests will have
many of them fail... those who send few requests will have few of them
fail" (as a proportion), will result in load propagating back to the
source, if properly implemented. This is *essential*.
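One hypothetical way to implement that property: per-sender queues served round-robin, dropping from the longest queue when the backlog exceeds a limit. The heaviest sender then loses a larger *proportion* of its requests, which is exactly what pushes load back to the flooder. All names below are invented for the sketch:

```python
from collections import deque

class FairQueue:
    """Per-sender queues; when over the backlog limit, drop from the
    longest queue -- i.e. penalise the heaviest sender."""

    def __init__(self, max_backlog):
        self.max_backlog = max_backlog
        self.queues = {}   # sender -> deque of pending requests
        self.dropped = {}  # sender -> count of rejected requests

    def enqueue(self, sender, request):
        self.queues.setdefault(sender, deque()).append(request)
        self.dropped.setdefault(sender, 0)
        if sum(len(q) for q in self.queues.values()) > self.max_backlog:
            # Over the limit: the longest queue belongs to the sender
            # using more than its share, so that's where we drop.
            worst = max(self.queues, key=lambda s: len(self.queues[s]))
            self.queues[worst].pop()
            self.dropped[worst] += 1

    def dequeue_round_robin(self):
        # Serve one request from each non-empty queue in turn, so a
        # light sender is never starved by a heavy one.
        out = []
        for sender, q in self.queues.items():
            if q:
                out.append((sender, q.popleft()))
        return out
```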
> 
> Cheers,
> Michael

I'm not sure what you're trying to do here. Please read my other email.
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.