On 6/21/06, Matthew Toseland <toad at amphibian.dyndns.org> wrote:
> On Wed, Jun 21, 2006 at 05:17:45PM +0100, Michael Rogers wrote:
> > Thought experiment: we have three peers: one fast, one medium, one
> > slow. We answer roughly 1/4 of incoming requests locally, and forward
> > roughly 1/4 to each peer. How many requests should we accept?
>
> IMHO we should slow down to the speed of the *median* peer - the medium
> one. An ubernode *must not* cause us to accept too many requests which
> are then all misrouted to it. However if we have a single peer on a
> dial-up then it's not unreasonable to route most of the traffic we
> would have sent to that peer to the next-best peer.
>
> > If we slow down to the speed of the slowest peer and our neighbours
> > do likewise, the slowest node will determine the speed of the whole
> > network.
>
> Indeed, this is bad.
>
> > If we exclude nodes below a certain speed, we waste their resources
> > and don't offer them anonymity.
>
> This is also bad. We don't want to have to require top end broadband
> for freedom of digital speech!
>
> > If we misroute whenever a peer is busy and run at the speed of the
> > fastest peer, one ubernode can attract a large share of the network's
> > traffic.
>
> Right.
>
> > We need a compromise - a limited degree of misrouting.
> >
> > Let's define the imbalance factor i = r_max / r_natural, where r_max
> > is the maximum rate at which a peer is allowed to accept requests,
> > and r_natural is the arrival rate of requests that would ideally be
> > routed to that peer. The value of i determines how much misrouting we
> > will allow.
> >
> > Let's say i = 2. In the example above, r_natural is 1/4 for all
> > peers, so r_max is 1/2, meaning that no peer should be given more
> > than 1/2 of the requests on average, no matter how many it's willing
> > to accept. This allows us to run somewhat faster than the slow peer,
> > and our neighbours can run somewhat faster than us, etc. - a few slow
> > peers don't drag down the whole network.
>
> I do think that we should take into account the median ... fewer
> arbitrary parameters is generally better.
>
> i = 2 essentially means we can send up to twice the number of requests
> to a peer as we ought to, correct? I think this is a bad way of
> looking at it ...
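The cap that the imbalance factor implies works out in a couple of lines; a minimal sketch of the definition i = r_max / r_natural, using the even 1/4 shares from the thought experiment (the peer names and the dict layout are just for illustration, not anything in Freenet's code):

```python
# Sketch of the imbalance factor from the thread: with i = 2 and an
# even 1/4 natural share per peer, no peer may receive more than 1/2
# of our requests, however fast it is. Names are illustrative only.

i = 2.0
natural_share = {"fast": 0.25, "medium": 0.25, "slow": 0.25}  # 1/4 answered locally
r_max = {peer: i * share for peer, share in natural_share.items()}
# Every peer ends up with the same cap of 0.5, since the natural
# shares are equal; unequal shares would give unequal caps.
```

So the cap scales with each peer's natural share rather than being a global constant, which is what keeps a single fast peer from absorbing the whole request stream.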
So, if we have 3 nodes connected, which can handle 1, 10, and 100
requests/min (for example), then we accept at most 21 requests/min? And
the 6 extra that can't go to the slow node get routed to the next-best
node?

If we're knowingly misrouting around slow nodes, then it seems to me we
should make a specific effort to have the one request that can go to
the slow node be the one that it is most likely to be able to serve,
instead of the one that happens to arrive first.

Evan
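Both halves of this reply can be sketched in a few lines. A hedged illustration only: the even natural split, the spill-to-fastest policy, and the closest-key heuristic below are all assumptions made for the example, not Freenet's actual algorithm, and none of the names come from its code:

```python
# Illustration of the scheme discussed in the thread, using Evan's
# example figures. All function names and policies are assumptions.

def assign(capacities, total, i=2.0):
    """Distribute `total` accepted requests/min over peers: each peer
    first takes what it can of its even natural share; the excess is
    misrouted to faster peers, but no peer may exceed i * natural."""
    natural = total / len(capacities)        # r_natural per peer
    r_max = i * natural                      # imbalance cap per peer
    got = {p: min(c, natural) for p, c in capacities.items()}
    overflow = total - sum(got.values())
    # Spill the remainder to peers with spare capacity, fastest first.
    for p, c in sorted(capacities.items(), key=lambda kv: -kv[1]):
        take = min(min(c, r_max) - got[p], overflow)
        got[p] += take
        overflow -= take
    return got, overflow

def circular_distance(a, b):
    """Distance between two locations in a [0, 1) circular keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def best_key_for_peer(queued_keys, peer_location):
    """Evan's suggestion: of the queued requests, send the slow peer
    the one it is most likely to be able to serve - taken here to be
    the key closest to its keyspace location, not the first to arrive."""
    return min(queued_keys, key=lambda k: circular_distance(k, peer_location))

# Evan's figures: peers handling 1, 10, and 100 requests/min, 21 accepted.
rates, leftover = assign({"slow": 1, "medium": 10, "fast": 100}, 21)
# The slow peer serves 1 request/min; the 6 requests it can't take are
# misrouted to the fast peer (7 + 6 = 13, under its r_max of 14).
```

Under these assumptions the numbers in the reply check out: the slow node serves exactly one request per minute, and the six misrouted requests land on the fast node without breaching the i = 2 cap.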
