On Tue, Jun 06, 2006 at 07:42:47PM +0100, Michael Rogers wrote:
> 
> 1. Congestion control - retransmitting lost packets without reducing the 
> transmission rate would lead to congestion collapse, so we need to slow 
> down when the link is congested. However, sharp changes in the 
> transmission rate should be avoided because they could have knock-on 
> effects for other nodes. I'm going to use the omnet++ network simulator 
> to investigate an AIMD (additive increase, multiplicative decrease) 
> congestion control algorithm similar to the one used by TCP. The 
> increase and decrease parameters should be chosen to compete fairly with 
> TCP while giving a smoother transmission rate.
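The AIMD update being described can be sketched in a few lines of Python. The ALPHA and BETA values below are illustrative placeholders, not the fairness-tuned parameters the simulations are meant to find:

```python
# Minimal AIMD (additive increase, multiplicative decrease) sketch.
# ALPHA and BETA are illustrative; choosing them to compete fairly
# with TCP while smoothing the rate is exactly the open question.

ALPHA = 1.0   # window growth per loss-free round trip
BETA = 0.5    # multiplicative cut applied when loss is detected

def update_window(cwnd, loss_detected):
    """Return the new congestion window after one round trip."""
    if loss_detected:
        return max(1.0, cwnd * BETA)   # back off sharply on congestion
    return cwnd + ALPHA                # otherwise probe for bandwidth

# Repeated calls produce the classic AIMD sawtooth:
cwnd = 10.0
cwnd = update_window(cwnd, False)  # 11.0
cwnd = update_window(cwnd, True)   # 5.5
```

A smaller BETA (gentler decrease) paired with a smaller ALPHA is the usual way to get a smoother rate while staying TCP-friendly.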

Why bother? Hasn't this been simulated a thousand times already? Isn't
there extensive literature?
> 
> 2. Load control - at the moment we wait for a peer to become overloaded 
> and start rejecting requests, then we back off for a while. I think we 
> can avoid rejecting requests in most circumstances by using an explicit 
> receiver window as in TCP (the receiver window is distinct from the 
> congestion window mentioned above, and doesn't use AIMD).
> 
> Each peer tells us how many more requests it's willing to accept based 
> on the rate at which it's currently able to process requests. The window 
> sizes specified by our peers tell us roughly how many requests we can 
> forward. We can take this into account when calculating our own window 
> sizes, thus propagating load information upstream before it becomes 
> necessary to reject requests.
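One plausible reading of the paragraph above, as a sketch: our advertised window is capped both by local processing capacity and by the windows our own peers have advertised to us. The combining rule here (a simple min over the sum) is an assumption for illustration, not something the proposal pins down:

```python
# Hypothetical sketch of propagating load upstream via receiver windows.
# The min-over-sum combining rule is an illustrative assumption.

def advertised_window(local_capacity, downstream_windows):
    """How many more requests we tell an upstream peer we can accept.

    Capped by what we can process locally AND by how many requests
    our peers have said they will accept from us, so congestion
    downstream shrinks the window we advertise upstream.
    """
    forwarding_capacity = sum(downstream_windows)
    return min(local_capacity, forwarding_capacity)

# If our peers will only take 3 more requests between them, we
# advertise at most 3 upstream, even if we could queue 10 locally.
```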

Sounds like MRIs. I thought we'd agreed on an AIMD- and
token-bucket-based system that propagated load back to the sender?

> 
> 3. Load balancing - this is a complicated problem, partly because it's 
> political as well as technical. I suggest tackling the problem in two 
> stages: first we define the ideal policy for allocating resources, and 
> then we look for mechanisms to implement that policy.
> 
> To get things started, here's the policy I think we should adopt:
> * When there are enough resources to handle all requests, handle all 
> requests, no matter how unequal the load distribution
> * When there are not enough resources to handle all requests, give each 
> peer an equal share of the resources

NO! We need to give a greater share to those nodes which are sending
fewer requests. Unless we do this, load will not propagate back to the
sender. Load propagating back to the sender is an ABSOLUTE requirement
for ANY changes to load limiting. We must automatically limit floods,
and ensure that client nodes which are sending too many requests reduce
their send rate.

> * Give local clients priority over peers
> 
> The last point is open to debate - it's supposed to reduce the incentive 
> for people to modify their nodes in order to get better performance. An 
> alternative would be to lump all local clients together and treat them 
> as a single peer, but then the amount of resources given to local 
> clients would depend on the number of peers, which seems wrong.

Sounds like a bad idea to me. It might have security implications, and
in any case we have effectively a single local client in
ClientRequestScheduler; lots of requests are queued in the node, and
every <period determined by load balancing>, we start one.
> 
> Here's a mechanism that implements the policy above:
> * Keep a token bucket for each peer
> * Before handling a request from a peer, remove a token from its bucket
> * If the bucket is empty, reject the request
> * After processing a request from a peer, add a token to the bucket of a 
> random peer
>     * Tokens are created at the same rate requests are processed (see 
> load control above)
>     * All peers get an equal number of tokens
>     * If the chosen bucket is full, add the token to the emptiest 
> bucket instead (excess resources are distributed to whoever needs them)
> * After adding a token to a bucket, inform the peer of the number of 
> tokens in its bucket (this is the receiver window size - see load 
> control above)

Hmmm. This is significantly different from your previous proposal. And
it does not address overload.
> 
> I'm open to suggestions for other policies and mechanisms that implement 
> them, but without wishing to stifle debate, I'm not particularly 
> interested in suggestions for mechanisms that don't implement 
> well-defined and widely agreed policies. The simulator code will be in 
> SVN if anyone wants to simulate their own mechanism and see what policy 
> it implements. ;-)

We need you to simulate the original proposal first. That proposal
didn't address #4, but it did propagate load back to the source and deal
with all the other problems.
> 
> The mechanism above leaves one thorny problem unanswered: if a request 
> can't be forwarded to the best peer because of congestion control or 
> load control, should we send it to the next-best peer instead, or should 
> we reject it? Both solutions have the potential to waste bandwidth, but 
> routing to the next-best peer might solve the problem of slow nodes 
> occupying large portions of the key space. Hopefully simulations will 
> settle this question.

Routing to the next-best peer will result in requests always being routed
to the fastest node, and the whole network collapsing. At least, that
was our experience on 0.5.
> 
> 4. Resource contribution - it might be a good idea to give users an 
> incentive to contribute resources to the network. A few quick points:
> 
> * Free riding might not be such an urgent problem in friend-to-friend 
> networks as it is in open networks
>     * People might be reluctant to short-change their friends
>     * We can probably achieve a lot through peer pressure (ouch)
>     * Maybe put a few statistics on fproxy's darknet page
>         * Percentage of time each peer is online
>         * Number of requests sent/received
>         * _Not_ number of responses sent/received (see below)
> * On the other hand, resource contribution is also a security issue - an 
> attacker who can use resources without contributing might be able to DoS 
> the network
> * Incentive mechanisms can go wrong in a couple of ways:
>     * Reward apparent good behaviour without hard evidence
>         * Creates an incentive to lie
>         * Honest nodes are punished for being honest
>     * Reward the subset of good behaviour that can be verified
>         * Creates an incentive to prioritise verifiable behaviours
> 
> With regard to the last point, I'd like to investigate the effect of 
> prioritising requests (which have verifiable responses) over inserts 
> (which don't). This will tell us whether an incentive mechanism based on 
> the number of requests each peer answers could harm the network. But 
> this is a late addition to the SoC work, and the other jobs (congestion 
> control, load control, load balancing) will take priority.

I think we should defer work on #4 until we have something solid to deal
with the other problems. We need a new load balancing algorithm that:
- Treats all nodes equally, and in particular doesn't give away any
  information about which requests are local (our current one does give
  away some timing info on this).
- Limits load; if nodes are consistently rejecting too many requests on
  load, we should reduce everyone's send rate.
- Propagates load back to its source.
- Limits floods.
- Works with inserts. (our current one doesn't work very well with
  inserts).

I believe we came up with a proposal a while ago to address these
issues; whatever else you model, you should simulate that proposal at
some point, unless it turns out to be completely impractical.
> 
> Anyway, I hope this gives you a better idea of what I'll be working on 
> this summer. My current task is to implement the freenet protocol 
> (including backoff, swapping and routing) in a discrete event-based 
> simulator, with realistic values for node bandwidth, link latency, etc. 
> I also need a realistic load model, which will probably involve digging 
> through a lot of frost logs. :-) Once I'm happy with the simulated 
> network, I'll use it as a testbed to evaluate the new mechanisms.
> 
> Cheers,
> Michael
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.