There's some coverage of techniques that scale to hosting hundreds
of thousands of processes per node across indefinitely many nodes,
associated with the GPL'd, Java-based process-hosting toolkit called
Diet Agents...
http://diet-agents.sourceforge.net

This is itself the foundation platform of choice for continuing EU
research projects in scalable autonomous systems...
http://www.cascadas-project.org/
...and some of our own work in scalable hosting...
http://cefn.com/blog/btok.html

The main emphasis here is on handling resource contention and on
providing only core operations whose resource load is systematically
independent of the number of resources in the system: independent of
the number of processes and inter-process communication channels
locally, and independent of the number of hosts globally.
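To make that property concrete, here's a rough sketch of the principle in Python. This is not Diet Agents' actual API (the class and method names below are invented for illustration); it just shows what it means for every core operation to touch only O(1) state, so its cost stays flat however many processes and channels a host carries.

```python
# Illustrative only -- not Diet Agents' internals.  Each core
# operation (spawn, send, receive) does a constant amount of work:
# one hash-table access and one queue operation, regardless of how
# many processes the host already holds.

from collections import deque


class Host:
    def __init__(self):
        self.processes = {}   # pid -> mailbox; dict access is O(1)
        self.next_pid = 0

    def spawn(self):
        """Create a process; constant cost regardless of population."""
        pid = self.next_pid
        self.next_pid += 1
        self.processes[pid] = deque()
        return pid

    def send(self, pid, msg):
        """Deliver a message: one O(1) lookup plus one O(1) append."""
        self.processes[pid].append(msg)

    def receive(self, pid):
        """Pop the oldest message, or None if the mailbox is empty."""
        box = self.processes[pid]
        return box.popleft() if box else None


host = Host()
a, b = host.spawn(), host.spawn()
host.send(b, "hello")
print(host.receive(b))  # -> hello
```

Spawning a million processes doesn't change the cost of any single operation here; that's the invariant the paragraph above is describing.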

There's a bit more in the tutorial...
http://diet-agents.sourceforge.net/.rsrc/tutorial.html#diet-philosophy
...but of course the code is its own best description :)

Cefn
http://cefn.com/blog/

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Rose
Sent: Tuesday, July 10, 2007 2:18 PM
To: theory and practice of decentralized computer networks
Subject: [p2p-hackers] Super Jumbo Uber Massive Networks


Yo,
   I read a good paper the other day about building optimally random
networks (one of these [1,2], I don't remember which, but both are
worth reading), and one of the ideas popped out as obvious but
seemingly lost in a lot of P2P literature.  They call it elasticity:
the ability of a network to grow indefinitely without diminishing in
performance.  (So the amount of work done by any node in the network
can be bounded without respect to the size of the network.)  Even in
a lot of unstructured P2P systems there are fundamental issues with
scaling up many of the algorithms people publish.  The world
currently has about 1.1 billion internet users.  If trends continue,
many more will join, and a huge portion of these users will want to
run their own sites, servers, clients, etc.  If we want to make the
future peer based, I think scalability is going to take on a new
meaning.  So, with that in mind, I wanted to ask if people know of
any cool and interesting ideas, papers, or systems that they think
could scale up to massive sizes.  (Let's say on the order of 100
million to a billion nodes.)  I'd like to put together a collection
of P2P work that really scales, and I'm more than happy to put it up
in a public place for everyone to view.
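A toy sketch of the elasticity idea, in the spirit of the gossip-based peer sampling work cited in [1,2] (this is not the protocol from those papers, just the shape of it; VIEW_SIZE and SWAP are arbitrary illustrative values): each node keeps a fixed-size partial view of the network and periodically swaps a few entries with one random neighbour, so per-round work per node is bounded by the view size, never by the network size.

```python
# Toy gossip-based peer sampling: per-node state and per-round work
# are bounded by VIEW_SIZE, independent of the number of nodes N.

import random

VIEW_SIZE = 8  # fixed partial-view size (illustrative value)
SWAP = 4       # entries exchanged per gossip round (illustrative)


class Node:
    def __init__(self, node_id, seeds):
        self.id = node_id
        self.view = list(seeds)[:VIEW_SIZE]  # never contains self.id

    def gossip(self, nodes):
        """One round: swap SWAP view entries with one random peer."""
        if not self.view:
            return
        peer = nodes[random.choice(self.view)]
        mine = random.sample(self.view, min(SWAP, len(self.view)))
        theirs = random.sample(peer.view, min(SWAP, len(peer.view)))
        # Merge received entries, drop self-references and duplicates
        # (dict.fromkeys dedupes while preserving order), truncate.
        self.view = [v for v in dict.fromkeys(theirs + self.view)
                     if v != self.id][:VIEW_SIZE]
        peer.view = [v for v in dict.fromkeys(mine + peer.view)
                     if v != peer.id][:VIEW_SIZE]


# Tiny simulation: views stay bounded however many nodes we add.
N = 1000
nodes = [Node(i, [s for s in random.sample(range(N), VIEW_SIZE + 1)
                  if s != i][:VIEW_SIZE])
         for i in range(N)]
for _ in range(10):
    for n in nodes:
        n.gossip(nodes)
assert all(len(n.view) <= VIEW_SIZE for n in nodes)
```

Growing N to a million changes nothing about what any single node does per round, which is exactly the bounded-work property Jeff describes.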

-Jeff


[1]: http://people.inf.ethz.ch/spyros/papers/Gossip-based%20Peer%20Sampling.pdf
(Journal paper)
[2]: http://people.inf.ethz.ch/spyros/papers/Thesis-Voulgaris.pdf
(Thesis, long but interesting)
_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers