On Thu, Apr 13, 2006 at 05:39:58PM +0100, Michael Rogers wrote:
> 
> Matthew Toseland wrote:
> > I mean we won't get enough information to have a useful and accurate
> > rate for each pair.
> 
> I see your point, but hopefully it won't be necessary to have a very
> accurate rate for pairs that don't communicate much. The rate only needs
> to adapt quickly if there's a large amount of traffic, doesn't it?

I suppose so...
> 
> > This is not a serious problem, because we are not really dealing with a
> > point to point routed network here. The same proportion of requests from E
> > and from A will reach F. If D is overloaded, then everyone needs to slow
> > down, because *everyone* is talking to D. If they are further away they
> > probably talk to D less and so need to slow down less.
> 
> But that doesn't solve the problem of pushing the load back to A. If A
> causes D to become overloaded and everyone else who's using D slows
> down, A has no incentive to slow down. By pushing the load back along
> the path of the request that triggered the overload, you can quench
> sources that trigger more than their fair share of overloads.

Are you saying that this won't push the load back along the path? I
believe we are talking about this topology:

    E   F
    |   |
A---B---C---D

D is overloaded. A is sending far more requests than anyone else, many
of which end up at D.

C reduces its send rate to D.

Therefore, F and B reduce their send rates to C.

Therefore, E and A reduce their send rates to B.

The problem is that with a naive notion of queue fairness, C will
reject the same proportion of B's and F's requests. F's requests are
very few, so they should be given priority; the node producing more
requests should have the greater proportion of its requests rejected,
so that the load propagates back properly to its source. We might also
want to factor in each node's utility to us.

Is there a non-alchemical way to define such a queueing algorithm?

Actually, this IS easy. If we use simple round-robin queueing (as
opposed to some form of simulated queueing), we get exactly the desired
effect: we take a request from B, then a request from F, then a request
from B, then ... Since F isn't producing many requests, its requests
will generally be accepted, while many of B's will be rejected.
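
As a minimal sketch (hypothetical Java, not actual Freenet code; all
names made up), round-robin admission over per-peer queues looks like
this:

import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch only: round-robin admission over per-peer request queues.
// Each pass takes at most one request from each peer, so a quiet peer
// (F) nearly always gets through, while a flooder (B) supplies most of
// the requests left over to be rejected when capacity runs out.
class RoundRobinAdmitter {
    /** Accept up to 'capacity' requests this scheduling round. */
    static <R> List<R> admit(List<Queue<R>> perPeerQueues, int capacity) {
        List<R> accepted = new ArrayList<R>();
        boolean tookOne = true;
        while (accepted.size() < capacity && tookOne) {
            tookOne = false;
            for (Queue<R> q : perPeerQueues) {
                if (accepted.size() >= capacity) break;
                R r = q.poll();
                if (r != null) {
                    accepted.add(r);
                    tookOne = true;
                }
            }
        }
        return accepted; // anything still queued gets rejected
    }
}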

Now, how do we do that with simulated queueing (which IMHO is necessary
to minimize latency)? We want something that behaves exactly as above,
but which does not involve actually delaying the requests.

There must be some way to do this...

Let t = the target (total outgoing) rate
Let b = the rate of requests from B
Let f = the rate of requests from F

We know:
b + f > t (otherwise we don't need to drop any)

If b >> f, then we can simply allow through all F's requests and reject
(b+f-t) requests from B.

But in general:

Let's say b = 100, f = 3, t = 50.

From t, we know there are 50 slots per second; alternating between two
peers, F gets an opportunity to send every 2/50ths of a second, i.e. 25
of the slots. If F has nothing to send, the slot goes to B.

Since f = 3, F will actually use only 3 of its 25 slots.

The other 47 slots (F's 22 unused plus B's own 25) we can allocate to
B. So we pass through all 3 of F's requests and 47 of B's, rejecting
B's other 53.
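
A quick sanity check of that arithmetic, as a throwaway sketch in the
same hypothetical Java as above:

class SlotCheck {
    public static void main(String[] args) {
        int b = 100, f = 3, t = 50;
        int fGranted = Math.min(f, t / 2);        // F's half of the slots, capped by demand: 3
        int bGranted = Math.min(b, t - fGranted); // B gets every slot F doesn't use: 47
        int rejected = (b + f) - (fGranted + bGranted); // 53
        System.out.println(fGranted + ", " + bGranted + ", " + rejected); // 3, 47, 53
    }
}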

We can formalize this:

Let n = number of peers
b, f, t as above
Let q = t / n = the number of requests which each peer is guaranteed to
be allowed to send

Then a peer which sends q or fewer requests will have all of them
forwarded. A peer which sends more than that will have its first q
accepted.

Let l = the number of remaining request slots

We then allocate l to the over-capacity peers. If we can allocate all
the extra requests, no problem. If we can't, and several peers still
have extra requests:
- Rank the peers by how many requests over capacity they are.
- Allocate to the peer which is least over-capacity, then to the peer
  with the next-fewest over-capacity requests, and so on until we run
  out of spare slots.
  (or some similar but smoother and more mathematical allocator)
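
Here's a direct sketch of that allocator (hypothetical code, assuming
integer demands per accounting interval; this implements the literal
"least over-capacity first" rule above, not the smoother variant):

import java.util.Arrays;
import java.util.Comparator;

// Sketch of the quota-plus-spare allocator described above.
// demands[i] = requests peer i wants to send this interval;
// t = target total; q = t / n is each peer's guaranteed share.
class FairAllocator {
    static int[] allocate(int[] demands, int t) {
        int n = demands.length;
        int q = t / n;
        int[] granted = new int[n];
        int spare = t; // full capacity; subtract as we grant
        // First pass: everyone gets min(demand, q).
        for (int i = 0; i < n; i++) {
            granted[i] = Math.min(demands[i], q);
            spare -= granted[i];
        }
        // Second pass: hand the leftover slots (l, in the text) to
        // over-quota peers, least over-capacity first.
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingInt(i -> demands[i] - q));
        for (int i : order) {
            int excess = demands[i] - granted[i];
            if (excess <= 0 || spare == 0) continue;
            int extra = Math.min(excess, spare);
            granted[i] += extra;
            spare -= extra;
        }
        return granted;
    }
}

On the example above, allocate(new int[]{100, 3}, 50) returns {47, 3},
matching the hand calculation.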


Now, this is exactly what fair queueing does, right? If so, there must
be better algorithms than the above, but in any case, all we have to do
is queue fairly and the load will propagate AUTOMATICALLY back to the
source.

And we don't need to track stats per-path.
> 
> But then the problem is how to avoid throttling other paths that happen
> to share links with the path you want to throttle.

Easy: only throttle the flooders, as above. The hard part is specifying
exactly how to do it.
> 
> I understand that no path will necessarily be used more than once, but
> if the throttling travels back towards the source then that doesn't
> matter: like radiation therapy, the various rate reductions will meet
> near the source of the traffic.

Right.
> 
> > A-B-C-D-E-A (benzene ring)
> > 
> > Congestion at E, so E-D slows down, C-D slows down, B-C slows down, A-B
> > slows down, E-A slows down...
> 
> So the speed of every node in the ring is limited by the speed of the
> slowest node... is this still true if the ring's embedded in a larger
> network?
> 
>   F---E
>  /     \
> A       D---G
>  \     /
>   B---C
>        \
>         H
> 
> If you only keep one rate per neighbour then congestion at E will slow
> down CD, affecting HCDG. But if you keep one rate per pair of neighbours
> then HCDG will be unaffected, because C will reduce its rate from B to D
> but not its rate from H to D.

How does this work out with fair queueing?
> 
> Cheers,
> Michael
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
