Implemented in unstable 6348.

On Fri, Nov 21, 2003 at 06:39:34PM +0000, Toad wrote:
> On Fri, Nov 21, 2003 at 12:10:26PM +0000, Ian Clarke wrote:
> > Ian Clarke wrote:
> > >It seems that we aren't seeing the hoped-for specialization in NGR. 
> > 
> > Just did some more thinking about this (motivated by the daunting task 
> > of resorting to a simulator ;)
> > 
> > In pre-NGR, we saw pretty good specialization.  Under that scheme, a 
> > node was more likely to route towards another node from which it had 
> > received a successful response.
> > 
> > Because of the way current NGR is set up, the opposite is true.  If 
> > nodes default to over-optimistic (sometimes wildly over-optimistic) 
> > estimates, then whenever the node gets a response, its estimate around 
> > that key will get worse, making it less likely for the node to route to 
> > that node for keys close to the one it retrieved.  In effect, 
> > this is the *opposite* of the pre-NGR routing scheme - which, while 
> > imperfect, we know to have produced reasonably rapid specialization - 
> > and this will remain so until a significant amount of data has been 
> > collected for every node.
> > 
> > Of course, the changes to allow nodes to share estimator information 
> > should reduce this problem, but my suspicion is that even with this the 
> > anti-specialization effect is strong enough to prevent the network from 
> > specializing at all.
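To make the failure mode concrete, here is a minimal illustration (not actual Freenet code; the default value is an assumption) of how an over-optimistic default guarantees that any real observation makes a node look worse:

```python
# Hypothetical illustration of the anti-specialization effect: with a
# wildly optimistic default estimate, even a genuinely fast retrieval
# pushes the estimate upward, discouraging routing to the node that
# just succeeded.

OPTIMISTIC_DEFAULT_MS = 10   # assumed: unrealistically low default time
real_success_ms = 1_200      # a perfectly reasonable retrieval time

estimate_before = OPTIMISTIC_DEFAULT_MS
estimate_after = real_success_ms   # estimator now reflects the observation

# The successful node now looks *worse* than an untested one.
assert estimate_after > estimate_before
```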
> > 
> > The solution?  Try to make the degenerate case of NGR (i.e. when we have 
> > very little data) behave like the pre-NGR routing scheme.  How?  Make 
> > estimators default to being very, very pessimistic rather than very, very 
> > optimistic.  This way, when a node does successfully retrieve data, it 
> > will almost always become more likely to be routed to for 
> > keys close to the successfully retrieved key.
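A minimal sketch of this idea (hypothetical, not Fred's actual estimator class; the default constant and nearest-neighbour lookup are assumptions): with a pessimistic default, a real success can only improve a node's standing for nearby keys.

```python
# Hypothetical per-node estimator with a pessimistic default, so that
# any observed success improves the estimate for keys near the one
# successfully retrieved.

PESSIMISTIC_DEFAULT_MS = 600_000  # assumed: very high default retrieval time

class NodeEstimator:
    def __init__(self):
        # keyspace position (0.0-1.0) -> observed retrieval time (ms)
        self.reports = {}

    def report_success(self, key, time_ms):
        self.reports[key] = time_ms

    def estimate(self, key):
        if not self.reports:
            return PESSIMISTIC_DEFAULT_MS
        # crude nearest-neighbour lookup: use the report closest in keyspace
        nearest = min(self.reports, key=lambda k: abs(k - key))
        return self.reports[nearest]

est = NodeEstimator()
assert est.estimate(0.42) == PESSIMISTIC_DEFAULT_MS
est.report_success(0.40, 1_200)                      # one fast success near 0.4
assert est.estimate(0.42) < PESSIMISTIC_DEFAULT_MS   # node now looks better there
```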
> > 
> > Clearly, the danger here is that new nodes would never get tested 
> > because their estimators are so dismally pessimistic.
> > 
> > There are several possible solutions to this, the obvious one being to 
> > force NGR to route to a sub-optimal node every so often.
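The forced sub-optimal routing mentioned above amounts to an epsilon-greedy style selection; a minimal sketch under assumed names (`PROBE_PROBABILITY` and the dict layout are illustrative, not part of the proposal):

```python
import random

# Hypothetical sketch of forced exploration: occasionally route to a
# randomly chosen node so that pessimistic newcomers still get tested.

PROBE_PROBABILITY = 0.05  # assumed exploration rate

def choose_route(nodes, estimates, rng=random):
    """nodes: list of node ids; estimates: node id -> estimated time (lower is better)."""
    if rng.random() < PROBE_PROBABILITY:
        return rng.choice(nodes)                    # forced sub-optimal probe
    return min(nodes, key=lambda n: estimates[n])   # normal greedy NGR choice
```

With the probe disabled (a high random draw) the best-estimated node always wins; the rare probe is what keeps untested nodes from starving.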
> > 
> > A more refined approach may be to have "offline probing".  Basically, we 
> > have a process which, every so often, randomly selects a node and requests 
> > a key from it.  The key can be one that was previously retrieved from 
> > another node.  This request is used to educate the estimators of nodes 
> > which might not otherwise ever have requests sent to them.
> > 
> > Thoughts?
> > 
> > Ian.
> 
> I have been advocating something similar for the last week or two;
> thanks for coming up with a better justification.
> 
> Implementation:
> 
> Node estimators start off pessimistic, but with an initial
> specialization.
> 
> Every 30 (tunable, lower on nodes not getting any traffic) seconds, we
> pick a random key from a table we keep of recently requested keys, and
> request it at HTL 25 from the node with the lowest number of reports of
> events in its estimator (choosing randomly if several nodes have 0).
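The selection rule described above can be sketched as follows (hypothetical helper names; the `report_counts` and `recent_keys` structures are assumed, not part of Fred's code):

```python
import random

# Hypothetical sketch of the probe-target rule: probe the node whose
# estimator has reported the fewest events, breaking ties randomly,
# using a random key from the table of recently requested keys.

def pick_probe_target(report_counts, rng=random):
    """report_counts: node id -> number of events reported to its estimator."""
    fewest = min(report_counts.values())
    candidates = [n for n, c in report_counts.items() if c == fewest]
    return rng.choice(candidates)   # random tie-break, e.g. among 0-report nodes

def pick_probe_key(recent_keys, rng=random):
    """recent_keys: collection of recently requested keys."""
    return rng.choice(list(recent_keys))
```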
> -- 
> Matthew J Toseland - [EMAIL PROTECTED]
> Freenet Project Official Codemonkey - http://freenetproject.org/
> ICTHUS - Nothing is impossible. Our Boss says so.
> _______________________________________________
> Devl mailing list
> [EMAIL PROTECTED]
> http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
