Timm Murray:
> Large datastores tend to centralize the network.  A large datastore doesn't
> fill up as quickly, so your node caches more data and less of it falls out.
> On the surface this seems like an advantage;  indeed, for a node operator's
> short-term gain, it is an advantage.  However, over the long term it tends
> to hurt routing.  Nodes won't be requesting as much data from other nodes,
> and thus won't discover new nodes through requests.

I haven't seen _any_ compelling argument for why nodes with above-average
datastores should attract more than their "fair share" of requests.
What's yours?

I've heard the whispered rumors about simulations suggesting that
Freenet will starve some nodes of traffic while overloading others. If
the problem actually exists, I think it can easily be fixed by varying
the datasource-reset frequency inversely with request load.
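
A rough sketch of what I mean (the class and names below are made up for
illustration, not Freenet's actual code): when a node returns data, it
decides whether to overwrite the reply's DataSource field with its own
address, and the probability of doing so shrinks as its request load
grows.  Busy nodes then stop advertising themselves, while idle nodes
keep doing it.

import java.util.Random;

public class DataSourceResetPolicy {
    private final Random rng = new Random();
    // Load (requests per second) at which the reset probability drops to 1/2.
    private final double halfLoad;

    public DataSourceResetPolicy(double halfLoad) {
        this.halfLoad = halfLoad;
    }

    // True if this node should reset the DataSource of an outgoing reply
    // to point at itself.  p = 1 / (1 + load/halfLoad), so an idle node
    // almost always resets and a swamped node rarely does.
    public boolean shouldResetDataSource(double requestsPerSecond) {
        double p = 1.0 / (1.0 + requestsPerSecond / halfLoad);
        return rng.nextDouble() < p;
    }
}

An overloaded node would then fade out of other nodes' routing tables on
its own, with no global coordination needed.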

Anyway, I'm not too impressed with arguments that nodes won't see
enough requests. Freenet routing is grossly inefficient. Nodes are
going to be _falling_over_ with requests when it's actually used.
