Matthew Toseland wrote:
> Oskar tells me that the following will work a lot better than our
> current strategy for storing data, according to his simulations:

Well, I said that strategies other than the current one worked a lot
better _in_ my simulations. You can take that as you wish.

> We have a separate cache and store. Both are LRU. The cache stores
> everything which passes through the node (possibly excluding locally
> originated traffic on more paranoid nodes). The store stores ONLY data
> from inserts, and it only stores it if the HTL was reset on that node
> because it was a new best location for the request. (Is this right,
> Oskar?) The store would probably be larger than the cache.
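
For concreteness, the proposal above amounts to something like the
following (a minimal Python sketch of my own; the class and parameter
names are hypothetical, not taken from any actual code):

    from collections import OrderedDict

    class LRU:
        """Fixed-capacity store that evicts the least recently used key."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # drop the oldest entry

        def get(self, key):
            if key not in self.items:
                return None
            self.items.move_to_end(key)  # mark as recently used
            return self.items[key]

    class NodeStorage:
        """Separate LRU cache and LRU store, as described above."""
        def __init__(self, cache_size, store_size):
            self.cache = LRU(cache_size)  # everything passing through
            self.store = LRU(store_size)  # inserts only, on HTL reset

        def offer(self, key, data, is_insert, htl_was_reset):
            self.cache.put(key, data)
            if is_insert and htl_was_reset:
                self.store.put(key, data)

        def lookup(self, key):
            found = self.store.get(key)
            return found if found is not None else self.cache.get(key)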

This isn't exactly what I simulated (though it is similar, with the
addition described below). I got the best results in simulation from the
following:

During both inserts and requests, the query continues until it has gone
H steps without finding a node closer to the key than any seen so far.
Once the query terminates, and if it was an insert or the data was
found, the data is copied to the N visited nodes that were closest to
the key value.
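
In rough Python, that procedure looks like this (the Node class, the
circular keyspace, and the helper names are assumptions of mine for
illustration, not the simulation code):

    class Node:
        def __init__(self, location):
            self.location = location  # position in the keyspace [0, 1)
            self.neighbors = []
            self.storage = {}         # key -> data

    def distance(a, b):
        """Distance between two points on the circular keyspace [0, 1)."""
        d = abs(a - b)
        return min(d, 1.0 - d)

    def route(start, key, H):
        """Greedy walk toward key. Stop after H consecutive steps that
        fail to improve on the closest location seen so far."""
        visited = [start]
        current = start
        best = distance(start.location, key)
        bad_steps = 0
        while bad_steps < H:
            candidates = [n for n in current.neighbors if n not in visited]
            if not candidates:
                break  # nowhere unvisited left to go
            current = min(candidates, key=lambda n: distance(n.location, key))
            visited.append(current)
            d = distance(current.location, key)
            if d < best:
                best, bad_steps = d, 0  # new closest node: reset the counter
            else:
                bad_steps += 1
        return visited

    def insert(start, key, data, H, N):
        """Run the query, then copy the data to the N visited nodes
        closest to the key value."""
        visited = route(start, key, H)
        for node in sorted(visited, key=lambda n: distance(n.location, key))[:N]:
            node.storage[key] = data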

I never simulated a secondary store; I simply suggested that it could be
used to help make popular data available without overloading the nodes
closest to it. I don't know if it is a good method: I think I share
(what I perceive as) Ian's gut instinct that it would be nice to have a
single, adaptive data storage system, but I don't know how to do it. If
anybody does, tell me and I'll try it.

I used a large H (20) and a small N (3), and this was in a very small
(2000-node), highly connected (~30 neighbors on average) network.
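
With the sketch above, those parameters translate to roughly the
following (the random topology here is only a stand-in for a quick test;
it is not the graph model from my simulations):

    import random

    def build_network(n_nodes=2000, degree=30):
        """Crude random graph: each node picks `degree` random neighbors."""
        nodes = [Node(location=random.random()) for _ in range(n_nodes)]
        for node in nodes:
            others = [m for m in nodes if m is not node]
            node.neighbors = random.sample(others, degree)
        return nodes

    nodes = build_network()
    key = random.random()
    insert(random.choice(nodes), key, data=b"...", H=20, N=3)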

> Most DHTs include a two-level store, as Oskar pointed out. The best
> results on a static network come from storing it only on the single node
> closest to the target; storing it on the 3 nodes closest to the target
> seemed to work well on a dynamic network. But that's hard to implement,
> hence the suggested solution of storing the data on the "peaks".

I have also run simulations on active networks with node churn.

// oskar
