On Fri, Apr 28, 2000 at 03:17:10PM +0200, Oskar Sandberg wrote:
> 
> I implemented reference deprecation this morning, based simply on the
> idea that references are removed when they fail (not all references to
> the node that failed, just the one for the key that was attempted).
> This seems to work well, but just as I was about to commit, it struck
> me that while it is fair to remove other nodes' references one after
> another when those nodes are down, this creates a huge problem if one's
> own network connection is down.
> 
> If one's own connection is down, the node will go through all the
> references in the datastore, fail on all of them, and therefore remove
> all of them. So a single Request from the client to one's own node,
> made before one realizes that the network is down, will eat its entire
> datastore. Not a good thing.
> 
> The only solution I can think of is to limit the number of attempts a
> Request makes before it sends back a RequestFailed. Instead of going to
> the end of the DataStore, it would give up after failing to send to the
> 5 closest nodes. This is not perfect either (those 5 references still
> get eaten if the network is down, and with very bad luck clients could
> get back a RequestFailed even if the network is working), but possibly
> adequate. Do people think this is good enough?
>
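
If I'm reading the proposal right, the bounded version amounts to roughly
this (a sketch only -- Reference, RequestFailedException, and the rest are
names I invented, not the real classes):

    import java.util.Iterator;
    import java.util.List;

    class RequestFailedException extends Exception {}

    interface Reference {
        byte[] send(byte[] key) throws Exception; // forward the Request
    }

    class BoundedRouter {
        static final int MAX_ATTEMPTS = 5; // give up after the 5 closest fail

        // Walk the references closest-first; deprecate each one that
        // fails, but stop after MAX_ATTEMPTS instead of draining the
        // whole store.
        byte[] route(byte[] key, List closestFirst)
                throws RequestFailedException {
            int failures = 0;
            Iterator it = closestFirst.iterator();
            while (it.hasNext() && failures < MAX_ATTEMPTS) {
                Reference ref = (Reference) it.next();
                try {
                    return ref.send(key);       // success, pass the data back
                } catch (Exception e) {
                    it.remove();                // eat just this one reference
                    failures++;
                }
            }
            throw new RequestFailedException(); // the closest nodes all failed
        }
    }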


Oh man - I know this one is going to bite me, probably multiple times.
Ideally, the bad references should be marked, maybe with a count, and only
removed after a message has actually been sent. But since I can't write
this myself, at least right now, I'll be happy with whatever works :-)
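
Roughly what I have in mind, as a sketch (the class and names are made up;
on top of the count, the actual removal could also be held back until some
message has really gone out, proving the local link is up):

    import java.util.HashMap;
    import java.util.Map;

    class FailureCounter {
        static final int STRIKES = 3;             // failures before removal
        private final Map counts = new HashMap(); // reference -> Integer

        // A send to ref failed: bump its count. Returns true once the
        // reference has struck out and should actually be removed.
        boolean failed(Object ref) {
            Integer n = (Integer) counts.get(ref);
            int next = (n == null) ? 1 : n.intValue() + 1;
            counts.put(ref, new Integer(next));
            return next >= STRIKES;
        }

        // A send to ref worked: forgive its earlier failures.
        void succeeded(Object ref) {
            counts.remove(ref);
        }
    }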

Any thoughts about a maximum htl? There was another request with htl > 150
bouncing around the network this morning; my node saw it at least a dozen
times.
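
Even something as crude as clamping on receipt would stop it (again just a
sketch, and MAX_HTL is a number I pulled out of the air):

    class HtlClamp {
        static final int MAX_HTL = 25;

        // Cap the hops-to-live of every incoming request so a runaway
        // value can't bounce around for hundreds of hops.
        static int clamp(int htl) {
            return Math.min(htl, MAX_HTL);
        }
    }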

David Schutt 

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
