On Mon, Aug 21, 2000 at 02:28:42PM +0100, Michael ROGERS wrote:
> >No, you're still not getting it. The known key values and the nodes they
> >refer to are still functions of the state of the Freenet as a whole,
> >evolving through the natural process of the Freenet's growth. The speed
> >of the connections is an external influence, not a function of the
> >Freenet at all.
> 
> I understand this distinction.

Good.

> >Now, we cannot claim that Freenet is free from external influences as it
> >stands; for example, nodes going down is an external influence that happens
> >because of something totally unrelated to the state of the Freenet
> >(somebody turns off their machine, has a power outage, network issues, or
> >whatever). And since we can't get away from this, we have to hope that the
> >natural evolution of the Freenet is not affected badly enough by this that
> >it cannot handle the situation (certainly, there is a level of volatility
> >in the lifetimes of nodes at which Freenet simply ceases to function).
> 
> But *if* we can incorporate reliability metrics into the routing algorithm, 
> they will form part of the natural evolution of freenet. Freenet can evolve 
> and route around network damage at the same time. This could be very 
> valuable in isolating cancer nodes, for example.

No. The Freenet routing algorithm is a very ingenious (but still unproven)
idea that Ian dreamed up for routing messages on an interconnected
network to locate data efficiently. We can't just say "we take this into
account" and think that that makes it a natural part of the system - it
doesn't.
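For readers following along, the core idea being discussed can be sketched
roughly like this (a minimal illustrative sketch, not the actual Freenet
code; the names and the integer key space are assumptions): each node
forwards a request to the neighbour whose known key is closest to the
requested key.

```python
# Hypothetical sketch of key-closeness routing, NOT the real Freenet
# implementation. Keys are modelled as plain integers for illustration.

def closest_neighbour(requested_key: int, routing_table: dict) -> str:
    """routing_table maps a known key -> the neighbour that supplied it.
    Forward to the neighbour whose known key is numerically closest."""
    best_key = min(routing_table, key=lambda k: abs(k - requested_key))
    return routing_table[best_key]

table = {0x10: "node-a", 0x80: "node-b", 0xF0: "node-c"}
print(closest_neighbour(0x90, table))  # node-b (0x80 is nearest to 0x90)
```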

> Unfortunately we probably can't make freenet route around damage just by 
> tacking reliability metrics onto the current key closeness algorithm. Any 
> variation in the key closeness algorithm between nodes constitutes network 
> damage, so you have to be careful to keep the reliability metrics for a 
> given node consistent between its neighbours. This brings us back to the 
> problem of a trust network between untrusted nodes; but there is still the 
> possibility that it can be made to work.

Yes, but any sort of trust system between the nodes is way way way off.

> >The same thing goes for trying to weight by connection: there _is_ a level
> >where the routing becomes so sheared by differences in connection quality
> >between different nodes that it will not route to the data as we predict
> >it will. Is this level small enough to preclude any weighting by
> >connection? I don't know, but to claim that it does not exist is lunacy.
> 
> The logical extreme of weighting by connection speeds is that you only 
> communicate with the neighbour which has the fastest connection speed. That 
> doesn't break the network, it just means that you effectively only have one 
> neighbour.

No, it does break the network, because Alice inserts something and it goes
to her fastest neighbor only, while Bob requests that thing and the
request goes only to his fastest neighbor, so he doesn't find it. That is
what broken means.
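The failure mode described above can be shown in a few lines (an
illustrative sketch under assumed node names and speeds, not a model of any
real deployment): when every node routes purely by connection speed, the
insert path and the request path no longer converge on the same node.

```python
# Hypothetical sketch of why pure speed-based routing breaks lookups.
# Node names and speed values are made up for illustration.

def fastest_neighbour(speeds: dict) -> str:
    """Route only to the fastest neighbour, ignoring the key entirely."""
    return max(speeds, key=speeds.get)

# Alice and Bob each have different neighbours with different speeds.
alice_speeds = {"node-x": 10.0, "node-y": 2.0}
bob_speeds   = {"node-y": 8.0, "node-z": 1.0}

insert_target  = fastest_neighbour(alice_speeds)  # Alice's data lands on node-x
request_target = fastest_neighbour(bob_speeds)    # Bob's request goes to node-y
print(insert_target == request_target)  # False: Bob never finds the data
```

Because the routing decision depends only on each node's local link speeds
rather than on the key, nothing steers Bob's request toward the node that
holds Alice's insert.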

> However I'm not a lunatic, and I'm willing to admit that there may be values 
> between "zero weighting" and "total weighting" which break the routing. We 
> need to experiment. We also have options beyond weighting the current 
> algorithm, such as using a new algorithm which incorporates reliability. But 
> designing such an algorithm needs deep thought and I'm supposed to be 
> working.  :)

Yes.
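The "values between zero weighting and total weighting" mentioned above can
be expressed as a single blend parameter (a hypothetical sketch, not a
proposed design; the linear blend, the key space size, and the
normalisation are all assumptions for illustration):

```python
# Hypothetical sketch: blend key closeness with connection speed via
# a weight w in [0, 1]. w = 0 is pure key-closeness routing; w = 1 is
# pure speed routing. Names and scaling are illustrative only.

def score(requested_key, known_key, speed, w, key_space=256, max_speed=10.0):
    closeness = 1.0 - abs(known_key - requested_key) / key_space
    return (1.0 - w) * closeness + w * (speed / max_speed)

# node-a: distant key, fast link; node-b: close key, slow link.
neighbours = {"node-a": (0x10, 9.0), "node-b": (0x80, 1.0)}
for w in (0.0, 1.0):
    best = max(neighbours, key=lambda n: score(0x90, *neighbours[n], w))
    print(w, best)  # 0.0 node-b, then 1.0 node-a
```

Experimenting would then mean sweeping w and measuring at what point
retrieval success collapses, rather than arguing about whether such a
breaking point exists.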

> 
> 
> Michael

-- 
\oskar

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
