On Wed, Nov 22, 2006 at 12:11:44AM +0000, Michael Rogers wrote:
> Here are some preliminary results from the simulator - I must stress
> that they're only preliminary. I haven't simulated token passing yet -
> these results only show throttling with backoff, throttling alone, and
> backoff alone.
>
> The load model is a bit simplistic: one in ten nodes is a publisher, and
> each publisher has ten randomly selected readers. Each publisher
> occasionally inserts a key, waits for ten minutes, then informs its
> readers of the key; the readers then request the key. The publication
> rate (and therefore the request rate) can be varied to investigate the
> effect of load.
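
Just to check I've understood the load model, I read it as roughly the
following. This is only a sketch - the node count, the key names and all
the identifiers are mine, not from the simulator, and it just prints the
event schedule rather than simulating anything:

    import java.util.*;

    class LoadModelSketch {
        static final int NODES = 1000;               // assumed network size
        static final long TEN_MINUTES = 10 * 60;     // seconds between insert and reads
        static final long RUN_LENGTH = 3 * 60 * 60;  // three hours, as in the runs

        public static void main(String[] args) {
            // Seconds between inserts per publisher; vary this to vary the load.
            long publishInterval = Long.parseLong(args[0]);
            Random random = new Random();

            for (int node = 0; node < NODES; node++) {
                if (node % 10 != 0) continue; // one node in ten is a publisher

                // Each publisher has ten randomly selected readers, fixed for the run.
                int[] readers = new int[10];
                for (int i = 0; i < readers.length; i++)
                    readers[i] = random.nextInt(NODES);

                // Print the insert/request schedule for this publisher.
                for (long t = random.nextInt((int) publishInterval);
                        t < RUN_LENGTH; t += publishInterval) {
                    String key = "key-" + node + "-" + t; // placeholder key name
                    System.out.println("t=" + t + "s node " + node + " inserts " + key);
                    for (int reader : readers)
                        System.out.println("t=" + (t + TEN_MINUTES) + "s node " +
                                reader + " requests " + key);
                }
            }
        }
    }

Is that about right? If so, the publish interval is the only knob, which
matches what you describe.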

This is good IMHO. Obviously in real life there is a lot of very popular
content, but that's "easy"; simulate the hard parts first.

> Each run lasted for three hours' simulation time, with the first hour's
> logs discarded to minimise the effect of the initial conditions.
>
> All three mechanisms showed an increase in throughput under increasing
> load, ie there was no congestion collapse. Throttling alone produced
> higher throughput than either throttling with backoff or backoff alone,
> especially under heavy load.

Timeouts?

> All three mechanisms showed a decrease in success rate with increasing
> load, suggesting that congestion collapse might eventually occur at high
> enough loads.

So increased load causes misrouting. Okay, we know this. There's a
tradeoff we need to make with any load balancing scheme...

> Throttling alone produced a higher success rate and slower degradation
> under load than either throttling with backoff or backoff alone.

Try with some nodes severely overloaded for external reasons. That is the
basic reason for backoff IMHO. The problem is that if some nodes are
severely overloaded, then requests to their keyspace will always fail...
this is bad! (You could try with per-node failure tables...?)

> This suggests that the backoff mechanism is not effective in controlling
> load, and the request throttle would work better without backoff. These
> conclusions are only tentative though - much more remains to be done,
> when I can find enough disk space for the logs!

Maybe there are some other, simple possibilities for dealing with a single
slow node. Well, obviously there is token passing... What we want is: if a
node is so overloaded that it can't serve any requests, we should avoid it;
otherwise, we want to reduce the number of requests to what it can handle
(possibly shrinking its specialization to accomplish this). Token passing
combined with queueing might work well - rough sketch of what I mean below.

> Cheers,
> Michael
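
Something like this, very roughly (all names are mine, this isn't code
from the node or the simulator): the receiving node hands out tokens to
each peer at whatever rate it can actually serve, a peer may only send a
request while it holds a token, and otherwise the request waits in a short
per-peer queue instead of being fired at a backed-off node.

    import java.util.*;

    class TokenQueueSketch {
        private final Queue<Runnable> queue = new LinkedList<Runnable>();
        private final int maxQueue;
        private int tokens = 0;

        TokenQueueSketch(int maxQueue) {
            this.maxQueue = maxQueue;
        }

        // The downstream node grants us tokens, i.e. tells us how many
        // further requests it is prepared to accept from us.
        synchronized void grantTokens(int n) {
            tokens += n;
            drain();
        }

        // Returns false if the request should be routed elsewhere: the
        // peer has granted us no tokens and our queue for it is already
        // full - exactly the "so overloaded we should avoid it" case.
        synchronized boolean submit(Runnable sendRequest) {
            if (tokens > 0) {
                tokens--;
                sendRequest.run();
                return true;
            }
            if (queue.size() >= maxQueue) return false;
            queue.add(sendRequest);
            return true;
        }

        // Send queued requests as tokens become available.
        private void drain() {
            while (tokens > 0 && !queue.isEmpty()) {
                tokens--;
                queue.poll().run();
            }
        }
    }

The property I'd hope for is that a node which can't serve anything simply
never grants tokens, so once its queue fills we route around it, while a
merely slow node gets exactly as many requests as it says it can handle.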
