> > I haven't seen _any_ compelling argument why above-average nodes
> > should attract more than their "fair share" of requests. What's
> > yours?
> 
> Over time, the large node simply accumulates more data from Freenet.  
> This means there should be more nodes which point to data on the large
> node. Thus, there will be more requests routed to the large node.

Does Freenet not take into account the reliability, proximity, speed, etc. of
nodes when fetching files? If it does, I would think it could automatically
sense that a node was becoming saturated and find a more satisfactory link
and/or clone the resource.
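
To make that concrete, here is the kind of check I'm imagining, as a rough
Python sketch. This is purely hypothetical on my part -- the peer fields and
the scoring rule are my own invention, not Freenet's actual routing code:

# Hypothetical sketch only -- not Freenet's real routing logic.
# Assumes each known peer has an estimated key-distance, a recently
# observed latency, and a rough load figure.
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    key_distance: float   # how "close" the peer is to the requested key
    latency_ms: float     # recently observed round-trip time
    load: float           # 0.0 = idle, 1.0 = saturated (estimated)

def choose_next_hop(peers, load_limit=0.9):
    # Skip peers that look saturated, then prefer the one closest to the
    # key, breaking ties by observed latency.
    usable = [p for p in peers if p.load < load_limit]
    if not usable:
        usable = peers  # everyone is busy; fall back to the best we have
    return min(usable, key=lambda p: (p.key_distance, p.latency_ms))

peers = [
    Peer("big-node", key_distance=0.1, latency_ms=40, load=0.95),
    Peer("small-node", key_distance=0.2, latency_ms=25, load=0.30),
]
print(choose_next_hop(peers).name)   # prints "small-node"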

Why is it a bad thing if machines with more resources do more work, as long
as the number of nodes is high enough to absorb any machine going offline,
and all resources are cloned often enough that they never exist only on a
single machine (assuming they are being requested at all)?
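
Just to put a number on the "cloned often enough" part: if a file sits on n
nodes that are each online with probability p, the chance that at least one
copy is reachable is 1 - (1-p)^n. A quick back-of-the-envelope in Python (my
own illustration, not anything Freenet promises):

# Probability that at least one of n replicas is online, if each node
# is up independently with probability p.
def availability(p, n):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(availability(0.7, n), 6))
# prints roughly 0.7, 0.973, 0.99757, 0.999994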

RedHat.com has a bigger machine with more bandwidth, so it serves out the
most copies of the xyz iso files. Smaller mirrors that are still larger than
normal machines also serve out the same files, and lots of small (home-use)
machines serve them too. Assuming the network could automatically locate and
make available all these copies, it makes sense for a machine close to one of
the small nodes to grab the file from there first. If no small node with the
file is close, then try a mirror node that is a little bigger; if no mirror
node with the file is close, try the big node. I understand that Freenet
doesn't work exactly like that, but the structure seems logical: the bigger a
node is, the more traffic it gets, but most of the traffic is still
decentralized to small nodes. Is there a technical reason that structure
doesn't work?
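
For what it's worth, here is a toy sketch of the structure I mean -- try
nearby small nodes first, then the mirrors, then the big node. Again this is
just my own illustration; I know Freenet's lookups actually follow key
closeness, not node size:

# Toy illustration of the tiered idea.  Not how Freenet actually routes.
def fetch(nodes):
    # Walk the tiers from smallest to biggest; within a tier, pick the
    # closest node that actually has the file.
    for tier in ("small", "mirror", "big"):
        candidates = [n for n in nodes if n["tier"] == tier and n["has_file"]]
        if candidates:
            return min(candidates, key=lambda n: n["distance"])["name"]
    return None  # nobody has it; would have to look further afield

nodes = [
    {"name": "home-pc",    "tier": "small",  "distance": 2,  "has_file": True},
    {"name": "uni-mirror", "tier": "mirror", "distance": 5,  "has_file": True},
    {"name": "redhat.com", "tier": "big",    "distance": 20, "has_file": True},
]
print(fetch(nodes))   # prints "home-pc"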

