On Sunday 18 June 2006 08:27, Michael Rogers wrote:
> Ed Tomlinson wrote:
> > If you are using tokens to control sends then why the high level AIMD? We
> > already have a packet level AIMD to control bandwidth so the token system
> > should suffice to balance load.
>
> I agree, packet-level AIMD congestion control and request-level token
> buckets are enough. (Actually the token buckets could probably be
> implemented at the packet level as well; you'd have to inform your peers
> of changes more frequently because of the finer granularity, but the
> receiver window / tokens available field can probably be piggybacked on
> an existing outgoing packet in most cases.)
>
> > Base assumption is that fair scheduling is what we really want. Is it
> > really?
>
> I'm open to other suggestions (eg the ability to allocate more bandwidth
> to your close friends), but fair scheduling is the simplest so I think
> it should probably be the default. Once fair scheduling is implemented
> we can experiment with other algorithms just by changing the way tokens
> are allocated to buckets, which should be a reasonably self-contained
> change.
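For concreteness, the request-level token bucket being discussed could look something like the sketch below. This is purely illustrative (the class and method names are mine, not Freenet's code): one bucket per peer, refilled at a steady rate, with a send allowed only while tokens remain - an empty bucket is the "backed off" state.

```python
import time

class TokenBucket:
    """Hypothetical per-peer token bucket sketch (illustrative, not Freenet code)."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity        # maximum burst size, in tokens
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        # Add tokens for the time elapsed since the last refill, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now

    def try_send(self, cost=1):
        """Consume `cost` tokens if available; return True if the request
        may be sent, False if this peer is effectively backed off."""
        self._refill()
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Changing how the per-peer `capacity` and `refill_rate` are chosen is then exactly the "self-contained change" mentioned above - fair scheduling just means equal values for every peer.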
The problem I see with fair scheduling is that freenet is going to want to route more packets to some nodes. Which nodes will depend on the distribution of locations. Keys should be evenly distributed, but the locations of peers will tend to cluster (most peers will tend to have locations close to yours). Nodes at the edge of a cluster will tend to get more packets, so fair distribution may well hurt us. We probably want to size a peer's token bucket relative to the amount of the keyspace the peer is the best path for (resizing after all location swaps).

> > How do we control the number of tokens? I see how we create and use
> > tokens, but what decides how many we need to start with, and how do we
> > know if we are using too few or too many?
>
> The size of the buckets determines how large the traffic bursts from
> each peer can be - we can experiment with different values in the
> simulations.

Hopefully we can find a metric that allows the node to self-adjust.

> > With the above scheme a node is in effect backed off when its
> > outgoing token bucket is empty.
>
> Agreed, but the backoff periods will be much shorter.

This is always what we hope happens - see the metrics suggested below.

> > Another observation. Quite a few message types ignore the backoff
> > conditions (eg Swap* messages). We should look closely at these and
> > decide if they really should be bypassing the backoff controls.
>
> Yup, the bandwidth limiter should take all traffic into account (except
> maybe keepalives) and should be changed from a leaky bucket to a token
> bucket to reduce latency.

Bandwidth control (packet level) does look at all packets; I am talking about the message level. It makes sense not to throttle replies to in-transit messages. It does make sense to control messages that add load to the network when we are limited. Maybe throttling swap requests would be a good thing?
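The suggestion above - sizing a peer's token bucket by the slice of keyspace it is the best path for - could be estimated roughly as follows. This is a hypothetical sketch (function names and the sampling approach are mine): sample keys on the circular [0, 1) keyspace, find which peer location is closest to each, and allocate tokens in proportion.

```python
def circ_dist(a, b):
    """Circular distance between two locations on the [0, 1) keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def keyspace_shares(peer_locations, samples=10000):
    """Estimate, by sampling keys, the fraction of the keyspace each peer
    is closest to (i.e. is the best next hop for)."""
    counts = {loc: 0 for loc in peer_locations}
    for i in range(samples):
        key = i / samples
        best = min(peer_locations, key=lambda loc: circ_dist(loc, key))
        counts[best] += 1
    return {loc: c / samples for loc, c in counts.items()}

def bucket_sizes(peer_locations, total_tokens):
    """Split a token budget across peers in proportion to keyspace share;
    this would need re-running after location swaps."""
    shares = keyspace_shares(peer_locations)
    return {loc: max(1, round(total_tokens * s)) for loc, s in shares.items()}
```

With clustered peer locations this gives the edge-of-cluster peers (which cover more keyspace) correspondingly larger buckets, which is the intended departure from fair scheduling.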
> > The patch I sent to toad_ and mrogers implements the high level (on
> > FNPSSKDataRequest, FNPCHKDataRequest & FNPInsertRequest messages)
> > (G)AIMD. It sends until a window is exceeded and then waits for
> > requests to complete before opening the next transmit window. It also
> > implements part of the metrics mentioned above.
>
> I thought you were arguing against high-level AIMD above?

We now use an ethernet-like algorithm to control backoff. My patch changes this to (G)AIMD, which might be better. If we use a token scheme, both the existing scheme and my (G)AIMD scheme can probably be scrapped.

Do you see any value in implementing the metrics mentioned previously?

- Percent of time the node is backed off
- Percent of time each peer is backed off
- Percent of time the node routed to a non-optimal node due to backoff

With the above metrics we could measure how well a backoff scheme works and what effect it has on routing.

Thanks
Ed
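The "percent of time backed off" metrics above mostly need interval bookkeeping. A minimal sketch (hypothetical names, one meter per node or per peer; timestamps passed in explicitly so it can be driven from real or simulated clocks):

```python
class BackoffMeter:
    """Illustrative sketch: track what fraction of time a node/peer
    spends backed off (e.g. with an empty token bucket)."""

    def __init__(self, window_start=0.0):
        self.window_start = window_start
        self.total_backed_off = 0.0
        self.backed_off_since = None  # start of the current backoff, or None

    def enter_backoff(self, now):
        if self.backed_off_since is None:
            self.backed_off_since = now

    def leave_backoff(self, now):
        if self.backed_off_since is not None:
            self.total_backed_off += now - self.backed_off_since
            self.backed_off_since = None

    def percent_backed_off(self, now):
        """Percent of the elapsed window spent backed off, including any
        backoff still in progress."""
        t = self.total_backed_off
        if self.backed_off_since is not None:
            t += now - self.backed_off_since
        elapsed = now - self.window_start
        return 100.0 * t / elapsed if elapsed > 0 else 0.0
```

The third metric (routed to a non-optimal node due to backoff) would instead count, at each routing decision, whether the best-located peer was skipped because its bucket was empty.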
