On Thursday 24 January 2008 21:16, Michael Rogers wrote:
> Matthew Toseland wrote:
> > Two different failure modes. RNF means we ran out of nodes, DNF means
> > we ran out of HTL (randomly chose to end the request).
>
> Right, sorry, at one stage I was thinking they should be merged if we
> switched to a weighted coin, so I lumped them together. I'm not sure
> whether I still think so (maybe RNF carries some useful information for
> the requester?) but I do think they should be treated identically in
> terms of branching (see below).
>
> > Per node failure tables would probably be a good solution: when we get
> > a DNF, we don't route to that node the next time;
>
> Sounds sensible.
>
> > Firstly, why is this useful? Can't we just kill the request on getting
> > a DNF, and route differently next time?
>
> We could - I hadn't thought of using failure tables, I was just trying
> to get as close as possible to the current search behaviour (i.e.
> limited branching).
>
> > When we receive a request, we have a pDrop chance of killing it at
> > that point. If we don't, we route it.
>
> Just to check: "at that point" means after checking the local cache and
> store, right?
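For reference, the per-node failure table idea quoted above can be sketched in a few lines: after a peer returns DNF for a key, we avoid routing that key to the same peer on the next attempt. This is only an illustrative sketch - all names here are invented, and this is not Freenet's actual FailureTable code.

```python
# Minimal sketch of a per-node failure table (hypothetical names, not
# Freenet's real API): remember which (key, peer) pairs DNFed, and route
# the next attempt for that key to the closest peer that hasn't failed it.
class FailureTable:
    def __init__(self):
        self.failed = set()  # (key, peer) pairs that have returned DNF

    def record_dnf(self, key, peer):
        self.failed.add((key, peer))

    def pick_peer(self, key, peers_by_closeness):
        """Return the closest peer that has not already DNFed this key;
        fall back to the closest peer if every peer has failed it."""
        for peer in peers_by_closeness:
            if (key, peer) not in self.failed:
                return peer
        return peers_by_closeness[0]
```

With something like this, a retry after a DNF automatically goes to the second-closest peer instead of repeating the one that just failed.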
Yes I think so.

> > If a node gives us RejectedLoop, we move on; if a node gives us a
> > RejectedOverload, we evaluate pDrop again (we don't want to keep
> > searching for a non-overloaded node forever, do we?).
>
> Maybe we should adjust the probability as we would with a DNF - counting
> overloaded nodes as visited will automatically adjust the load produced
> by each request to the amount of overload in the network.

We want to reduce the number of requests, not make the requests we do send
produce less load and therefore be less successful and contribute further
to the problem, surely? They're not really part of the tree - they are
limited by pDrop and by the number of peers on the node, and they don't fan
out further, so they won't cause the total to become infinite.

> > If we get a RouteNotFound, we evaluate pDrop again and continue.
>
> So we move on without adjusting the probability? I'm not sure that's a
> good idea - an RNF visits at least one new node, so it should be counted.

Killing the request (95% of the time) on an RNF means small darknets will
perform *very* badly. IMHO it is important that small networks be
functional, especially if we are expecting them to be merged into larger
networks eventually.

If we don't retry on DNF, clearly it is sensible to retry on RNF: we will
have an average visited nodes count of 1/pDrop, which is finite. The RNF
sub-tree is not really part of the DNF tree. The DNF tree has every
possibility of becoming infinite. However, if we send a request to a node
and it returns an RNF, we know that it ran out of nodes - it's a dead end.
AFAICS it provides only a multiplicative effect on the total number of
nodes in the tree.

This is a bit vague, but torpedoing performance on even vaguely bad
topologies, and especially on very small networks, is a very bad thing
IMHO.

> > If we get a DataNotFound, we drop the request unless random(1.0) <
> > pDrop.
> > We move on to the next node, and if that also gives us a DataNotFound,
> > we do the same.
> >
> > So the probability of trying at least one peer is 1-pDrop, the
> > probability of trying at least two peers is pDrop*(1-pDrop), a third is
> > pDrop^2*(1-pDrop) ... (if we assume that almost all requests we do give
> > DNFs).
>
> Sounds good - I think we should do the same for RejectedOverload and
> RNF, though.

I was trying to understand your proposal, I'm not saying I support it. The
other option - for DNFs to be fatal - is much simpler.

> Cheers,
> Michael
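For what it's worth, the branching probabilities quoted above are easy to sanity-check with a quick simulation. This is only a sketch under the quoted assumptions - every attempt DNFs, the request is killed up front with probability pDrop, and after each DNF it moves on to another peer only when random(1.0) < pDrop:

```python
import random

def peers_tried(p_drop, rng):
    """Simulate one request at a single node under the quoted proposal,
    assuming every attempt returns DNF: kill the request outright with
    probability p_drop, then after each DNF move on to the next peer
    only if random(1.0) < p_drop."""
    if rng.random() < p_drop:
        return 0          # killed before trying any peer
    tried = 1
    while rng.random() < p_drop:
        tried += 1        # survived the coin, try another peer
    return tried

rng = random.Random(1)
p_drop = 0.5
n = 100_000
samples = [peers_tried(p_drop, rng) for _ in range(n)]

# These should come out close to 1 - pDrop = 0.5 and
# pDrop * (1 - pDrop) = 0.25, matching the quoted formula.
p_at_least_1 = sum(s >= 1 for s in samples) / n
p_at_least_2 = sum(s >= 2 for s in samples) / n
```

So the number of peers tried per hop is geometrically distributed and the tree stays finite in expectation, which is the point of the proposal.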
