You can't really ignore the implications of NATs and firewalled nodes that easily, since most computers on the Internet these days are behind NATs or firewalls.

But even if you do ignore their existence, the determining factor in whether two nodes in a P2P network can communicate is that they know of each other's existence, and that they know each other's location in information space (i.e. not just their location in IP space).

It is not realistic to assume that every node in a P2P network will have this information for every other node in the network, at least not if you want the network to scale, and so it is necessary for nodes to select a subset of all other nodes in the P2P network with which they can communicate.
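
To make that concrete, here is a toy sketch of what I mean (the names are my own invention, and this is not Freenet's code): a node keeps a bounded subset of peers, each identified both by its location in the key space and by its IP address, and never tries to know about everyone.

    KEY_SPACE = 2 ** 160   # e.g. a 160-bit circular key space

    def circular_distance(a, b):
        """Distance between two locations on the key circle."""
        d = abs(a - b) % KEY_SPACE
        return min(d, KEY_SPACE - d)

    class RoutingTable:
        """Bounded set of known peers: key-space location -> (ip, port)."""
        def __init__(self, my_location, max_peers=20):
            self.my_location = my_location
            self.max_peers = max_peers
            self.peers = {}

        def add_peer(self, location, address):
            self.peers[location] = address
            if len(self.peers) > self.max_peers:
                # Evict the peer whose key-space location is furthest from ours.
                furthest = max(self.peers,
                               key=lambda loc: circular_distance(loc, self.my_location))
                del self.peers[furthest]

The point is just that a node only ever knows a handful of peers, chosen by where they sit in information space rather than in IP space.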

Of course, the practicalities of operating a P2P network, which include issues such as establishing cryptographic tunnels and dealing with NATs and firewalls, provide significant additional motivation for restricting the subset of nodes with which a particular node might seek to communicate.

Ian.

On 10 Mar 2006, at 02:08, Jeff Rose wrote:

It seems like people are always putting arbitrary restrictions on p2p systems and simulations in terms of connectivity, but is this really necessary? Unless you are trying to use NATed nodes (assume we can punch or route through a neighbor), just about any pair of computers on the Internet can be neighbors. In essence, the Internet is a fully connected overlay graph. All DHTs and other less-structured schemes are doing is deciding which links to send messages down. So when you talk about "links existing", do you just mean that a given pair maintains some amount of regular communication, or just that they know of each other's existence in the network? Maybe since you are coming from the Freenet side of things, connectivity has a lot more meaning than in other schemes?
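
Roughly, in toy Python (made-up names, not any particular DHT's code), all I mean by a "link" is that the forwarding decision picks one of the neighbors a node already knows about:

    KEY_SPACE = 2 ** 160

    def circular_distance(a, b):
        d = abs(a - b) % KEY_SPACE
        return min(d, KEY_SPACE - d)

    def next_hop(target_key, my_location, neighbours):
        """Return the neighbour to forward to, or None if we are closest."""
        best = min(neighbours, key=lambda n: circular_distance(n, target_key))
        if circular_distance(best, target_key) < circular_distance(my_location, target_key):
            return best    # "the link exists": we know this peer and can reach it
        return None        # no neighbour is closer to the key than we are

Calling next_hop(target, my_location, list(known_peers)) either names the link to use or tells you the message has arrived at (or near) its destination.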

-Jeff

Ian Clarke wrote:
On 3/7/06, Ranus <[EMAIL PROTECTED]> wrote:
    Hui Zhang has published a paper named "Using the Small-World Model to Improve Freenet Performance". It should correspond to your idea, so maybe you could read that.
Be careful of this paper. If I recall correctly, most of their results can be attributed to the fact that they ensured that links existed between adjacent nodes in the graph, which obviously has a dramatic beneficial effect relative to a network where local links may be missing: it means that, in the worst case, you can find the node you are looking for with an exhaustive search along local links alone. Our findings, as presented in Oskar's thesis, are that Freenet-style edge selection results in the desired degree of clustering without this "artificial" help.
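
To illustrate the point with a toy model (my own sketch, not the code from their paper): if every node i on a ring is guaranteed links to i-1 and i+1, greedy routing can never get stuck, since in the worst case it just walks the ring one local link at a time; take that guarantee away and the edge selection itself has to provide the clustering.

    import random

    def build_links(n, long_links_per_node=1, guarantee_local=True):
        """Ring of n nodes, optional guaranteed local links, 1/d-biased long links."""
        dist = lambda a, b: min(abs(a - b), n - abs(a - b))
        links = {i: set() for i in range(n)}
        for i in range(n):
            if guarantee_local:
                links[i].update({(i - 1) % n, (i + 1) % n})
            candidates = [j for j in range(n) if j != i]
            weights = [1.0 / dist(i, j) for j in candidates]
            links[i].update(random.choices(candidates, weights=weights,
                                           k=long_links_per_node))
        return links

    def greedy_hops(links, n, src, dst, max_hops=100000):
        """Hops taken by greedy routing, or None if it gets stuck."""
        dist = lambda a, b: min(abs(a - b), n - abs(a - b))
        node, hops = src, 0
        while node != dst and hops < max_hops:
            best = min(links[node], key=lambda j: dist(j, dst))
            if dist(best, dst) >= dist(node, dst):
                return None   # stuck: can happen when local links aren't guaranteed
            node, hops = best, hops + 1
        return hops if node == dst else None

In this toy model, comparing greedy_hops() over networks built with guarantee_local=True and False makes the difference obvious: the former always reaches the target, while the latter frequently gets stuck unless the chosen long-range links happen to cluster well.
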
Ian.

_______________________________________________
p2p-hackers mailing list
p2p-hackers@zgp.org
http://zgp.org/mailman/listinfo/p2p-hackers
_______________________________________________
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences
