On Tue, Aug 22, 2000 at 07:23:46AM +0200, Neil Barsema wrote:
> Scott and Oskar,
> 
> I'm really touched by your faith in the current routing mechanism, but I
> think that if you ignore the physical network Freenet runs on, it will
> never be more than an interesting experiment.

If it doesn't work, it won't be anything at all.

I have never said that weighing by connection can't be done. I have said
that I don't know whether it can be done, and that the idea that it will
work for sure, as long as all nodes use the same algorithm to weigh their
connections, is nonsense.

Here's hoping that it can be done (together with all the other
questionable ideas I put on the list that this thread originated from) -
but don't try to tell me it is a given until you have either a good
simulation or the math to back it up.

> We claim that Freenet moves information to where it is most wanted; as an
> example we say a piece of information originating in the States only needs
> to cross the Atlantic once. However, if we route completely independently
> of the physical network, the 'closest' node could be some high-school kid's
> dial-up connection in Hawaii. So all the traffic would end up crossing the
> Pacific!

That example has always been incorrect.

> Before a line of code was even written we were discussing this problem and
> agreed some sort of refinement of the closeness metric would eventually be
> necessary to take the physical network into account. Ping speeds were
> mentioned, the layout of IP addresses (favoring nodes on your local
> subnet), and raw speeds.
> 
> In my view the routing mechanism translates to: find the best node to
> forward the request to, and do this consistently. However, a good node is
> not determined by keyspace specialization alone; there are more factors,
> like speed, datastore size, its connectedness to the rest of Freenet, and
> probably some more.

Either we find the data or we don't - we can't start making up arbitrary
requirements for what a good node is without concerning ourselves with
their impact on the very thing the routing is supposed to achieve.

> A couple of weeks ago I suggested using 2 or 3 parallel requests and
> storing the first reply as the reference; this means all the factors
> mentioned above are taken into account in determining the best node. Ian
> picked up on this and refined it to 2 requests some of the time (random
> forking).

I never understood what this achieved at all. Say Alice forks a request,
sending one message to Bob and one to Charles (the two closest references
for the key).

Charles is fast and routes the request onward, finding the data at David,
and sends a reply back with the data and David as the DataSource. David is
Alice's new reference for the data.

Bob is slow and routes the request onward, finding the data at David as
well, and sends a reply which Alice discards. How does it matter to Bob or
Charles whether Alice takes David's address as the reference from the data
she retrieved via Charles rather than from the data she retrieved via Bob?
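
To make the scenario concrete, here is a minimal sketch in plain Java -
not the actual Freenet request handling; the Reply type, the request()
helper, the peer names and the latencies are all made up for illustration -
of forking to two peers and keeping whichever reply arrives first:

    import java.util.concurrent.*;

    public class ForkSketch {
        record Reply(String viaPeer, String dataSource) {}

        // Simulate sending a request through a peer; both routes end at David.
        static Callable<Reply> request(String peer, long latencyMs) {
            return () -> {
                Thread.sleep(latencyMs);          // network and routing delay
                return new Reply(peer, "David");  // David holds the data
            };
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            CompletionService<Reply> inFlight = new ExecutorCompletionService<>(pool);
            inFlight.submit(request("Charles", 50));  // fast branch
            inFlight.submit(request("Bob", 500));     // slow branch
            Reply first = inFlight.take().get();      // keep the first reply only
            System.out.println("Reply via " + first.viaPeer()
                    + ", new reference: " + first.dataSource());
            pool.shutdown();
        }
    }

Either way the surviving reference is David, so the fork buys Alice some
speed on this one request but changes nothing about the reference she ends
up with, or about Bob and Charles.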

I'm not against limited forking of requests - I considered and posted what
changes would be needed to the protocol for it to work - but as long as we
have the current far-reaching (average 30 hops) references, I don't see
how it would help at all.

> In a mature network this might suffice, but I'm not sure we need to be so
> stingy regarding requests - I mean, Gnutella does broadcasts for Buddha's
> sake!

By their own admission, Gnutella scales by separation. This means that any
one Gnutella user cannot reach users beyond his horizon - which is not
something we want. They do this because they know the whole network would
otherwise sink under its own weight.

The math of broadcasts is extremely simple:

If Cn is the capacity every node adds, and Tn is the traffic every node
adds, then in a broadcast network every node's traffic has to be carried
by every other node: total capacity grows as Cn * N, while total traffic
grows as Tn * N^2. All you have to do is solve the equation:

Cn * N = Tn * N^2

to find N = Cn / Tn, the maximum number of nodes that can be in contact
with one another before the network is saturated. By making Cn >> Tn, you
can make N pretty large, but it is always limited.
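
As a toy check of that arithmetic (the numbers are invented; this is just
the saturation bound above in code, not a real traffic model):

    public class BroadcastLimit {
        public static void main(String[] args) {
            double cn = 10_000;  // capacity each node adds, e.g. messages/sec relayed
            double tn = 2;       // traffic each node injects, broadcast to everyone
            // Capacity grows as cn * n, broadcast traffic as tn * n^2; equal at:
            double maxN = cn / tn;
            System.out.println("Saturates around N = " + (long) maxN + " nodes");
            // Even with capacity 5000x the injected traffic, the horizon is
            // 5000 nodes - large, but always limited.
        }
    }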

> The advantage of using parallel requests all of the time is that it
> reduces the effect of nodes disappearing.
> 
> 
> Neil

-- 
\oskar

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
