On Thursday 14 August 2003 00:27, Ian Clarke wrote:

> > It seems like there are two components of the algorithm which are
> > separable from one another, both in Mnet and in Freenet.  One is how
> > to collapse the various kinds of measurements of performance --
> > latency, throughput, hit rate (== 1-DNF rate), connection-failure-rate
> > -- into a single scalar.
>
> Yes, we do this by having quite a well refined definition of what we are
> estimating, namely "The time required between routing to this node and
> getting the data, assuming the data is of average size".  This means,
> for example, that the cost of a Data Not Found message, assuming that
> the data *is* in Freenet, is the time required for this node to fail,
> plus the time required for another node to fetch the data.  This
> requires a few simplifying assumptions, but all have a rational basis.

I think the part that says "assuming that the data *is* in Freenet" spells 
trouble. The key space is very sparse - otherwise we would get collisions. 
That means that if routing is based directly on the key, someone could 
randomly generate keys falling in one part of the key space and request 
them. Since none of those keys correspond to real data, every request would 
fail, causing the nodes specialising in that part of the key space to 
receive very bad routing ratings.
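As a minimal sketch of why this is cheap for an attacker (Python; assumes keys are SHA-1 digests, as Freenet CHKs are, and leaves out the actual request/rating machinery):

```python
import hashlib
import os

def generate_key_in_region(prefix_byte, max_tries=1_000_000):
    """Brute-force a key whose first byte lands in the target region
    of the keyspace.  Because routing is based directly on the key,
    each random try has a 1/256 chance of matching, so a hit comes
    after a few hundred hashes on average."""
    for _ in range(max_tries):
        key = hashlib.sha1(os.urandom(16)).digest()
        if key[0] == prefix_byte:
            return key
    raise RuntimeError("no key found (vanishingly unlikely)")

# Ten bogus keys that all route toward nodes specialising in the
# 0x42.. region; every request for them ends in Data Not Found.
attack_keys = [generate_key_in_region(0x42) for _ in range(10)]
```

The point is only that choosing the routing location is as cheap as computing hashes, since the attacker never needs matching data to exist.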

One solution would be to bind the learning to the requesting node only, but 
then a request would do the rest of the network no good.

Another option is to take a hash of the key (a hash of the hash, since the 
key is itself a hash) and base specialisation and routing on that, rather 
than on the key itself. This would make it much more difficult to 
guess/generate keys in a chosen part of the keyspace, and thus more 
difficult to DoS that part of the network this way.
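In code the proposal amounts to one extra hashing step before routing - a sketch, again assuming SHA-1 and glossing over how nodes actually learn specialisations:

```python
import hashlib

def routing_location(key: bytes) -> bytes:
    """Under the proposal, nodes specialise on (and route by) a
    second hash of the key, not the key itself."""
    return hashlib.sha1(key).digest()

# The ordinary key is already a hash of the document...
key = hashlib.sha1(b"some document").digest()
# ...but routing and specialisation would use the re-hashed value.
loc = routing_location(key)
```

The function is deterministic, so all nodes agree on where a key lives without exchanging anything beyond the key.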

Or am I completely wrong here?

Gordan
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl