On Wednesday 17 March 2010 18:33:01 Evan Daniel wrote:
> Currently, when we need to drop an opennet connection, we use the LRU
> (least recently used) policy: we drop the connection that has least
> recently served a successful request. Unfortunately, I can't find a
> link to the paper that describes why LRU was chosen, though the basic
> idea is obvious: if the node has too many peers around a certain
> location, then those peers will get fewer requests, and therefore one
> of them is more likely to be the peer dropped.
>
> I'm curious whether other algorithms were explored, and if so how they
> performed in comparison. The obvious candidates are LFU (least
> frequently used), LFU-window (LFU within a finite time window), and
> LFU-aging (LFU but with periodic reduction in the frequency counts, so
> that formerly useful nodes that are no longer useful eventually age
> out).
>
> LFU-aging is normally described as having discrete aging steps at
> (somewhat) large intervals, because it is normally discussed in the
> context of a large cache, where the O(n) time to perform the aging
> computation is problematic. However, we could do continuous
> exponential aging on every request without any problems, because the
> number of connections is small. That is, every time a request
> succeeds, we first multiply the score of each connection by some
> constant alpha (0 < alpha <= 1), and then increment the score of the
> connection that had the success. (For 0 < alpha <= 0.5 this is
> precisely equivalent to LRU; for alpha = 1 it is precisely equivalent
> to (non-window) LFU.)
>
> LFU-aging with decay on every request seems (intuitively, and we know
> what I think of that...) like it should be a good match. If the
> difference in usefulness between different nodes is moderate, and the
> requests are presumed to be an essentially memoryless random process
> (a very good assumption, imho), then plain LRU is overly likely to
> drop one of the more useful connections just because it has been
> unlucky recently. By using LFU with aging (or LFU-window) we get a
> sort of running-average behaviour: a connection that is useful, but
> has been unlucky (and not received any requests recently), will score
> better than one that is mostly useless but happened to get a good
> request very recently.
>
> Has this idea (or one like it) been investigated before?
>
> Thoughts? Is this worth study?
>
> (Submitted to the bug tracker as 3985.)
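
For concreteness, the per-success update Evan describes is only a few
lines. A minimal sketch in Java (PeerScores and its methods are
hypothetical names for illustration, not Freenet code):

import java.util.HashMap;
import java.util.Map;

// Sketch only: a hypothetical score table, not a real Freenet class.
public class PeerScores {
    // Decay factor: 0 < alpha <= 0.5 reproduces LRU, alpha = 1 plain LFU.
    private final double alpha;
    private final Map<String, Double> scores = new HashMap<String, Double>();

    public PeerScores(double alpha) {
        if (alpha <= 0.0 || alpha > 1.0)
            throw new IllegalArgumentException("need 0 < alpha <= 1");
        this.alpha = alpha;
    }

    // Register a newly connected peer with a zero score.
    public synchronized void addPeer(String peer) {
        scores.putIfAbsent(peer, 0.0);
    }

    // Call on every successful CHK request: decay every score,
    // then credit the peer that served the success.
    public synchronized void onSuccess(String peer) {
        for (Map.Entry<String, Double> e : scores.entrySet())
            e.setValue(e.getValue() * alpha);
        scores.merge(peer, 1.0, Double::sum);
    }

    // The lowest-scoring peer is the candidate to drop.
    public synchronized String dropCandidate() {
        return scores.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}

The equivalences in Evan's parenthetical fall out of the geometric
series: a success k requests ago contributes alpha^k, so for
alpha <= 0.5 a single fresh success (+1) outweighs the decayed sum of
any finite earlier history (alpha + alpha^2 + ... < 1), and peers sort
purely by the recency of their last success, which is LRU. For
alpha = 1 the score is simply the raw success count, i.e. plain LFU.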
Oskar's theoretical work on opennet used LRU and got very good results, which were *almost* backed by provable math, and well established in simulation. We'd have to talk to our deep theorists if we wanted to change this. Obviously, when I say LRU, I mean least-recently-succeeded-CHK-request, and it is subject to various heuristics (such as which peers are in grace periods, etc.).
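
Whichever scoring policy wins, the heuristics mentioned above sit in
front of it rather than replacing it. A hedged sketch of that layering
(OpennetPeer, DropPolicy, and both methods are hypothetical names, not
Freenet's actual API):

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch only: grace periods and similar heuristics filter the
// candidate set first; the LRU/LFU score only ranks the survivors.
interface OpennetPeer {
    double score();          // e.g. PeerScores value, or time of last CHK success
    boolean inGracePeriod(); // e.g. recently connected, still being evaluated
}

final class DropPolicy {
    static Optional<OpennetPeer> chooseDropCandidate(List<OpennetPeer> peers) {
        return peers.stream()
                .filter(p -> !p.inGracePeriod())
                .min(Comparator.comparingDouble(OpennetPeer::score));
    }
}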
