Mark J Roberts:
> Timm Murray:
> > Over time, the large node simply accumulates more data from Freenet.  This means 
> > there should be more nodes which point to data on the large node. Thus, there will 
> > be more requests routed to the large node.
> 
> Uhh... so? The node's big; it can handle lots of requests. That's
> not a problem. What's a problem is if Freenet says: "Hey, your node
> is 10% better than usual, so let's send every request to it!" I
> don't see why this would happen.

Looks like I totally, embarrassingly missed your point the first time
around... I'll try this again. :}

You're right: the bigger your store, the more requests your node
will receive. And overloaded nodes are bad because they make the
network unreliable. So our objective is obviously to prevent
overload.

Well, I really detest artificially constricting the store size in
order to regulate request load. But I _do_ understand your point
now: larger stores demand more bandwidth. Low-bandwidth nodes need
some way to avoid overload, and constricting the size of the store
is one way to do it. The _wrong_ way to do it....

What's creepy about this are the various heuristics proposed to
accomplish it: "don't make a large store!", "if you have Y
bandwidth, use an X megabyte store!", etc. It's impossible to find
an acceptable one. Which means that nodes will have to detect
overload and adjust themselves. Which also means that we can stop
promoting this dangerous small-store idea - the recommended size of
the store should be based on the memory required to index its
contents.
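
To be concrete about what "detect overload and adjust" could mean,
here's a minimal Java sketch (all names hypothetical, not actual
Freenet code): track a smoothed request rate and compare it against
whatever capacity the node has measured for itself, rather than
against a fixed store-size rule:

    // Hypothetical sketch of self-adjusting overload detection; the
    // smoothing factor and the notion of "capacity" are assumptions.
    class OverloadMonitor {
        private double smoothedLoad = 0.0;  // smoothed requests/sec
        private final double alpha = 0.1;   // smoothing factor (assumed)
        private final double capacity;      // sustainable requests/sec

        OverloadMonitor(double capacity) {
            this.capacity = capacity;
        }

        // Call once per sampling interval with the observed rate.
        void sample(double requestsPerSecond) {
            smoothedLoad = alpha * requestsPerSecond
                         + (1 - alpha) * smoothedLoad;
        }

        // Fraction of capacity in use; above 1.0 means overloaded.
        double loadFactor() {
            return smoothedLoad / capacity;
        }

        boolean isOverloaded() {
            return loadFactor() > 1.0;
        }
    }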

One solution might be: when overloaded, set the DataSource not to
yourself but to the node you would have routed the request to if you
didn't have the data. In practice you'd do this with a probability
scaled by the current throughput.
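
Something like this, maybe (hypothetical Java again, reusing the
OverloadMonitor sketch above; the linear scaling of the deflection
probability is just a guess):

    // Hypothetical sketch: when overloaded, advertise the next-best
    // route as the DataSource instead of ourselves, with probability
    // scaled by how far over capacity we are.
    class NodeReference { }  // placeholder for a node's identity

    class DataSourcePolicy {
        private final OverloadMonitor monitor;
        private final java.util.Random rng = new java.util.Random();

        DataSourcePolicy(OverloadMonitor monitor) {
            this.monitor = monitor;
        }

        // 'self' is our own reference; 'nextBestRoute' is where
        // routing would have sent the request had we not held the
        // data.
        NodeReference chooseDataSource(NodeReference self,
                                       NodeReference nextBestRoute) {
            // Deflect with probability rising linearly once load
            // exceeds capacity, clamped to [0, 1].
            double p = Math.min(1.0,
                       Math.max(0.0, monitor.loadFactor() - 1.0));
            return (rng.nextDouble() < p) ? nextBestRoute : self;
        }
    }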

Needs more processing.
