>No, you're still not getting it. The known key values and which nodes they
>refer to are still functions of the state of the Freenet as a whole,
>and they evolve through the natural process of the growth of the Freenet.
>The speed of the connections is an external influence, not a function of
>the Freenet at all.

I understand this distinction.

>Now, we cannot claim that Freenet is free from external influences as it
>stands, for example nodes going down is an external influence that happens
>because of something totally unrelated to the state of the Freenet
>(somebody turns off their machine, has a power outage, network issues, or
>whatever). And since we can't get away from this, we have to hope that the
>natural evolution of the Freenet is not affected so badly by this that
>it cannot handle the situation (certainly, there is a level of volatility
>in the lifetimes of nodes where Freenet simply ceases to function).

But *if* we can incorporate reliability metrics into the routing algorithm, 
they will form part of the natural evolution of Freenet. Freenet can evolve 
and route around network damage at the same time. This could be very 
valuable for isolating cancer nodes, for example.
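
To make that concrete, here is a minimal sketch of the kind of blending I 
have in mind. Everything in it is hypothetical (the names, the 
normalisation, the alpha knob); it is not the current routing code:

    // Hypothetical sketch: blend key closeness with a locally observed
    // reliability estimate. alpha = 0 is today's pure-closeness routing;
    // alpha = 1 routes on reliability alone.
    class BlendedScore {
        // Both inputs are assumed normalised to [0, 1].
        static double score(double closeness, double reliability, double alpha) {
            return (1.0 - alpha) * closeness + alpha * reliability;
        }
        public static void main(String[] args) {
            // A nearby but flaky node vs. a slightly more distant, solid one.
            System.out.println(score(0.9, 0.3, 0.25)); // 0.75
            System.out.println(score(0.8, 0.9, 0.25)); // 0.825: the solid node wins
        }
    }

With any alpha above zero, a cancer node that eats requests would see its 
observed reliability decay at its neighbours, and they would gradually 
route around it.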

Unfortunately we probably can't make Freenet route around damage just by 
tacking reliability metrics onto the current key closeness algorithm. Any 
variation in the key closeness algorithm between nodes constitutes network 
damage, so you have to be careful to keep the reliability metrics for a 
given node consistent between its neighbours. This brings us back to the 
problem of building a trust network between untrusted nodes; but there is 
still the possibility that it can be made to work.
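
To see why the consistency matters, suppose two of node X's neighbours have 
observed different success rates for X (all numbers hypothetical):

    // Hypothetical illustration of the consistency problem. Neighbours
    // A and B hold different reliability estimates for the same node X,
    // so they rank X differently against a rival node Y and send the
    // same key to different places. That divergence is exactly the kind
    // of variation in the closeness algorithm that constitutes damage.
    class DivergenceDemo {
        static double score(double closeness, double reliability, double alpha) {
            return (1.0 - alpha) * closeness + alpha * reliability;
        }
        public static void main(String[] args) {
            double alpha = 0.5;
            // A has mostly seen X succeed; B has mostly seen X fail.
            double xAtA = score(0.8, 0.9, alpha); // 0.85
            double xAtB = score(0.8, 0.2, alpha); // 0.50
            double y    = score(0.7, 0.6, alpha); // 0.65
            System.out.println("A routes to X: " + (xAtA > y)); // true
            System.out.println("B routes to X: " + (xAtB > y)); // false
        }
    }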

>The same thing goes for trying to weigh by connection: there _is_ a level
>where the routing becomes so sheared by differences in connection quality
>between different nodes that it will not route to the data as we predict
>it will. Is this level small enough to preclude any level of weighing by
>connection? I don't know, but to claim that it does not exist is lunacy.

The logical extreme of weighting by connection speeds is that you only 
communicate with the neighbour that has the fastest connection. That 
doesn't break the network; it just means that you effectively have only 
one neighbour.
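
In code the degenerate case is easy to see. A hypothetical total-weighting 
chooser never looks at the key at all:

    import java.util.Map;

    // Hypothetical sketch of the degenerate case: with connection speed
    // as the only criterion, the choice never depends on the key, so
    // every request goes to the single fastest neighbour.
    class TotalWeighting {
        static String pickNeighbour(Map<String, Double> speeds, String key) {
            String best = null;
            double bestSpeed = -1.0;
            for (Map.Entry<String, Double> e : speeds.entrySet()) {
                if (e.getValue() > bestSpeed) {
                    bestSpeed = e.getValue();
                    best = e.getKey();
                }
            }
            return best; // the same answer for every key
        }
        public static void main(String[] args) {
            Map<String, Double> speeds = Map.of("a", 10.0, "b", 50.0, "c", 25.0);
            // Different keys, same choice every time.
            System.out.println(pickNeighbour(speeds, "key1")); // b
            System.out.println(pickNeighbour(speeds, "key2")); // b
        }
    }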

However, I'm not a lunatic, and I'm willing to admit that there may be 
values between "zero weighting" and "total weighting" which break the 
routing. We need to experiment. We also have options beyond weighting the 
current algorithm, such as using a new algorithm which incorporates 
reliability from the start. But designing such an algorithm needs deep 
thought, and I'm supposed to be working.  :)
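
If anyone wants to run the experiment before I get back to it, the obvious 
first test is a sweep over the weighting. A hypothetical harness follows; 
simulateNetwork is a placeholder for whatever simulator we end up building, 
and does not exist yet:

    // Hypothetical experiment harness: sweep the weighting from pure
    // closeness (0.0) to total weighting (1.0) and record how routing
    // holds up at each point.
    class WeightSweep {
        public static void main(String[] args) {
            for (int i = 0; i <= 10; i++) {
                double alpha = i / 10.0;
                double successRate = simulateNetwork(alpha);
                System.out.printf("alpha=%.1f success=%.3f%n", alpha, successRate);
            }
        }
        // Placeholder: a real run would push many simulated requests
        // through a model network and count how many find their data.
        static double simulateNetwork(double alpha) {
            return 0.0; // stub; no simulator exists yet
        }
    }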


Michael
