On Sun, 16 Apr 2000, Ian Clarke wrote:
> Oskar Sandberg wrote:
> > 
> > How do we want the discouraging of nodes that don't reply to work?
> > Should we simply remove the reference in question (which doesn't mean
> > removing all references to that node, only one at a time), or should we
> > include some sort of extra blacklisting system where nodes that have bad
> > uptimes are avoided with some probability? Or is it time we did a
> > complete redesign of how the node chooses nodes to forward to, so that,
> > together with closeness, it also considers the reliability, locality,
> > and speed of the other node.
> > 
> > Can we decide this now?
> 
> We could implement this in stages, as all of these changes will be
> backward compatible with the current system.  Initially I would suggest
> that when a connection to a node fails, all references to that node are
> replaced with the address of the node corresponding to the closest key
> to the key associated with the failed node.  This may seem harsh, but
> duff nodes are so frequent now that I think we should have a
> zero-tolerance policy.

I'm not sure about that. One of the issues with the little-used network is
that nodes also have trouble finding other nodes, which means this could
lead to empty stores pretty soon.

I'm also not sure about the "replace with the node that has the next
closest key", because there is no reason why a node could not end up having
two regions of the keyspace cluster towards it: one which might also
cluster towards node A, but another which node A has never heard of. If
that were the case, it would suck if all of the failed node's references
suddenly pointed at node A when it went down.
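
Just so we are talking about the same mechanism, this is roughly how I read
the proposal, as a throwaway sketch (the class, the method names and the
keys-as-longs are all made up here, not the actual node code):

import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Toy sketch of the "repoint failed references at the node with the
// closest key" policy.  ReferenceTable and 64-bit keys are invented for
// illustration only.
public class ReferenceTable {

    // key -> address of the node we believe is responsible for it
    private final TreeMap refs = new TreeMap();

    public void addReference(long key, String nodeAddress) {
        refs.put(new Long(key), nodeAddress);
    }

    // Zero-tolerance: when a connection to failedAddress fails, every
    // reference that pointed at it is repointed at whichever remaining
    // node holds the nearest key (or dropped if there is none).
    public void nodeFailed(String failedAddress) {
        for (Iterator it = refs.entrySet().iterator(); it.hasNext();) {
            Map.Entry e = (Map.Entry) it.next();
            if (!failedAddress.equals(e.getValue()))
                continue;
            String replacement =
                closestOtherNode(((Long) e.getKey()).longValue(), failedAddress);
            if (replacement == null)
                it.remove();
            else
                e.setValue(replacement);
        }
    }

    // Linear scan for clarity; a real table would use the sorted map.
    private String closestOtherNode(long key, String excludeAddress) {
        String best = null;
        long bestDist = Long.MAX_VALUE;
        for (Iterator it = refs.entrySet().iterator(); it.hasNext();) {
            Map.Entry e = (Map.Entry) it.next();
            if (excludeAddress.equals(e.getValue()))
                continue;
            long dist = Math.abs(((Long) e.getKey()).longValue() - key);
            if (dist < bestDist) {
                bestDist = dist;
                best = (String) e.getValue();
            }
        }
        return best;
    }
}

The problem above is exactly that closestOtherNode() can keep answering
node A for a whole region of keys that A knows nothing about.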

> I think that we can then start to think about how we can incorporate a
> bias towards sending messages to closer nodes / nodes with lower
> ping-time - the strength of this bias should be configurable.  We could
> even measure speed of connections to particular nodes on-the-fly as we
> retrieve data from, or send data to, those nodes.  Perhaps a hashtable
> mapping node addresses to data through-put rates, although we shouldn't
> forget to remove data about nodes we no longer reference.  Nodes we
> don't know anything about should be assumed to have an average
> through-put.  As for reliability, that is not an issue with a
> one-strike-you're-out policy, and I think locality is probably irrelevant
> provided we are measuring throughput.

Yeah, locality is probably irrelevant on the Internet if throughput is
weighted right, but there might of course be situations where it is not.
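
For the throughput table Ian describes above, I imagine something roughly
like this (again only a sketch with invented names; how the estimate feeds
into the routing bias, and how strongly, would still be the configurable
part):

import java.util.Enumeration;
import java.util.Hashtable;

// Sketch of a per-node throughput table: measured rates in bytes/sec
// keyed by node address, with unknown nodes assumed to run at the
// average.  Names and the smoothing are invented for illustration.
public class ThroughputTable {

    private final Hashtable rates = new Hashtable(); // address -> Double

    // Record a completed transfer to or from a node.
    public void recordTransfer(String address, long bytes, long millis) {
        if (millis <= 0)
            return;
        double rate = (bytes * 1000.0) / millis;
        Double old = (Double) rates.get(address);
        // Simple exponential smoothing so a single slow transfer does
        // not permanently condemn a node.
        double smoothed = (old == null)
            ? rate : 0.7 * old.doubleValue() + 0.3 * rate;
        rates.put(address, new Double(smoothed));
    }

    // Nodes we know nothing about are assumed to be average.
    public double estimatedRate(String address) {
        Double r = (Double) rates.get(address);
        return (r != null) ? r.doubleValue() : averageRate();
    }

    // Drop measurements for nodes we no longer hold references to.
    public void forget(String address) {
        rates.remove(address);
    }

    private double averageRate() {
        if (rates.isEmpty())
            return 1.0; // arbitrary default before any data
        double sum = 0;
        int n = 0;
        for (Enumeration e = rates.elements(); e.hasMoreElements();) {
            sum += ((Double) e.nextElement()).doubleValue();
            n++;
        }
        return sum / n;
    }
}

The smoothing factor is arbitrary; the important bits are the
average-for-unknowns default and remembering to forget() nodes we drop.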

> Ian.
> 
-- 

Oskar Sandberg

md98-osa at nada.kth.se

#!/bin/perl -sp0777i<X+d*lMLa^*lN%0]dsXx++lMlN/dsM0<j]dsj
$/=unpack('H*',$_);$_=`echo 16dio\U$k"SK$/SM$n\EsN0p[lN*1
lK[d2%Sa2/d0$^Ixp"|dc`;s/\W//g;$_=pack('H*',/((..)*)$/)

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
