Matthew Toseland wrote:
> *IF* random replacement is viable, and mrogers' simulations are
> promising, but we need some more simulations with a more realistic
> model of requestors etc, a simple random replacement table would be a
> good solution.

If you have some suggestions for a more realistic model, I might be able 
to run some more simulations over the next couple of weeks. Here's what 
I'm currently doing (there's a rough code sketch after the list):

* Each key is permanently stored at the node nearest its location (an 
idealised model of inserts; I'm only interested in caching at the 
moment, and that's dominated by requests)

* Each key has a request rate drawn from some distribution. The 
distributions I've simulated so far are uniform (all keys have equal 
request rates) and Zipf (some keys are vastly more popular than others - 
this is the distribution seen by web caches)

* Requests for each key are generated by a Poisson process with the 
given request rate, and each request originates from a node chosen 
uniformly at random

* There's no churn, no swapping, no network congestion, no backoff...

* Requests nearly always succeed within 10 hops (this in itself shows 
there's something wrong with the model)

* For successful requests, I measure the number of nodes visited before 
the data's found (i.e. the search traffic) and the depth at which it's 
found (i.e. the transfer traffic, since the data travels back along the 
request path)
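
For anyone who wants to experiment, here's a rough Python sketch of the 
request side of the model described above. It's not the actual simulator 
code: it assumes a Kleinberg-style small world (ring neighbours plus 
distance-biased shortcuts), plain greedy routing with no backtracking 
(so nodes visited and depth coincide, unlike the real thing), and it 
leaves the caching layer out entirely. The node count, degree, Zipf 
exponent and so on are only illustrative.

import heapq
import random

N_NODES  = 500     # nodes with random locations on the [0,1) keyspace ring
N_KEYS   = 5000    # keys, each stored at the node nearest its location
N_REQS   = 50000   # requests to simulate
DEGREE   = 12      # links per node: two ring neighbours plus shortcuts
ZIPF_S   = 0.8     # Zipf exponent for the request rate distribution
MAX_HOPS = 10      # hop limit (raise this or DEGREE if too many requests fail)

random.seed(1)

def dist(a, b):
    """Circular distance on the keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

node_locs = sorted(random.random() for _ in range(N_NODES))
key_locs = [random.random() for _ in range(N_KEYS)]

# Idealised inserts: each key lives permanently at the node nearest it.
def nearest(loc):
    return min(range(N_NODES), key=lambda i: dist(node_locs[i], loc))

store = [nearest(loc) for loc in key_locs]

# Topology assumption: ring neighbours plus distance-biased shortcuts
# (a Kleinberg-style small world); the real simulator may build its
# graph quite differently.
links = [{(i - 1) % N_NODES, (i + 1) % N_NODES} for i in range(N_NODES)]
for i in range(N_NODES):
    while len(links[i]) < DEGREE:
        j = random.randrange(N_NODES)
        if j != i and random.random() < 0.01 / (0.01 + dist(node_locs[i], node_locs[j])):
            links[i].add(j)
            links[j].add(i)

# Per-key request rates: uniform would be [1.0] * N_KEYS; here Zipf,
# where the i-th most popular key has rate proportional to 1/(i+1)^s.
rates = [1.0 / (i + 1) ** ZIPF_S for i in range(N_KEYS)]

# One Poisson process per key, merged into a single event queue of (time, key).
queue = [(random.expovariate(r), k) for k, r in enumerate(rates)]
heapq.heapify(queue)

def route(origin, key):
    """Greedy routing by location; returns hops to the storing node, or None."""
    current, target = origin, key_locs[key]
    for hop in range(MAX_HOPS + 1):
        if current == store[key]:
            return hop
        current = min(links[current], key=lambda n: dist(node_locs[n], target))
    return None

hops = []
failures = 0
for _ in range(N_REQS):
    t, key = heapq.heappop(queue)
    heapq.heappush(queue, (t + random.expovariate(rates[key]), key))
    origin = random.randrange(N_NODES)   # requester chosen uniformly at random
    h = route(origin, key)
    if h is None:
        failures += 1
    else:
        hops.append(h)

print("success rate: %.3f" % (1.0 - float(failures) / N_REQS))
print("mean hops on success: %.2f" % (sum(hops) / float(max(len(hops), 1))))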

I guess a more realistic model would introduce some dependencies between 
the keys to represent splitfiles. Perhaps it would also have a more 
realistic hop counter, bandwidth limits, churn... The problem is how to 
capture any details that might affect the choice of cache replacement 
policy without making the model too complex, because everything has to 
fit in RAM.
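
One possible way to model the splitfile dependencies, just as a sketch 
(the block size, the grouping and the helper names below are made up): 
attach the Zipf popularity and the Poisson process to files rather than 
to individual keys, and request all the blocks of a file together from 
the same origin node.

import random

BLOCKS_PER_FILE = 32   # made-up splitfile size
N_KEYS = 5000
N_NODES = 500
ZIPF_S = 0.8

# Group consecutive keys into "files" of BLOCKS_PER_FILE blocks each.
files = [list(range(i, min(i + BLOCKS_PER_FILE, N_KEYS)))
         for i in range(0, N_KEYS, BLOCKS_PER_FILE)]

# Popularity now attaches to files rather than to individual keys.
file_rates = [1.0 / (i + 1) ** ZIPF_S for i in range(len(files))]

def next_request_time(file_id, now):
    """Next request for a file: exponential inter-arrival, i.e. Poisson."""
    return now + random.expovariate(file_rates[file_id])

def request_burst(file_id):
    """One file fetch: every block is requested from the same origin node."""
    origin = random.randrange(N_NODES)
    return [(origin, key) for key in files[file_id]]

# Example: the burst of (origin, key) requests for the most popular file.
print(len(request_burst(0)), "block requests from one origin")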

Cheers,
Michael
