I implemented reference deprecation this morning, based simply on the idea that
references are removed when they fail (not all references to the node that
failed, just the one for the key that was attempted). This seems to work well,
but just as I was about to commit, it struck me that while it is fair to remove
other nodes' references one after the other when those nodes are down, this
creates a huge problem if one's own network connection is down.
If one's own connection is down, the node will go through all the references in
the datastore, fail on all of them, and therefore remove all of them. So one
Request from the client to one's own node, before one realizes that the network
is down, will eat its entire datastore. Not a good thing.
The only solution I can think of is to limit the number of attempts a Request
makes before it sends back a RequestFailed. Instead of going to the end of the
DataStore, it would give up after failing to send to the 5 closest nodes. This
is not perfect either (those 5 references still get eaten if the network is
down, and with very bad luck clients could get back RequestFailed even when the
network is working), but it is possibly adequate. Do people think this is good
enough?
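Roughly, the capped routing loop I have in mind would look like this (again a
sketch only; `try_send`, `deprecate`, and `MAX_ATTEMPTS` are made-up names):

```python
# Hypothetical sketch of the proposed cap: try at most the 5 closest
# references, then return RequestFailed instead of walking the whole
# datastore and eating every reference in it.

MAX_ATTEMPTS = 5

def route_request(key, refs, try_send, deprecate):
    """refs: node references sorted closest-first for `key`."""
    for node in refs[:MAX_ATTEMPTS]:
        if try_send(node, key):
            return "Accepted"
        deprecate(key, node)   # at most 5 references get eaten
    return "RequestFailed"     # give up rather than drain the store

# Simulate a dead network: every send fails, but only the 5 closest
# references are removed before the Request gives up.
eaten = []
refs = [f"node{i}" for i in range(20)]
result = route_request("k", refs,
                       try_send=lambda n, k: False,
                       deprecate=lambda k, n: eaten.append(n))
assert result == "RequestFailed"
assert len(eaten) == 5
```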
--
Oskar Sandberg
md98-osa at nada.kth.se
#!/bin/perl -sp0777i<X+d*lMLa^*lN%0]dsXx++lMlN/dsM0<j]dsj
$/=unpack('H*',$_);$_=`echo 16dio\U$k"SK$/SM$n\EsN0p[lN*1
lK[d2%Sa2/d0$^Ixp"|dc`;s/\W//g;$_=pack('H*',/((..)*)$/)
_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev