On Wed, Mar 31, 2010 at 6:22 PM, Matthew Toseland
<toad at amphibian.dyndns.org> wrote:
> On Wednesday 17 March 2010 18:33:01 Evan Daniel wrote:
>> Currently, when we need to drop an opennet connection, we use the LRU
>> (least recently used) policy: we drop the connection that has least
>> recently served a successful request.  Unfortunately, I can't find a
>> link to the paper that describes why LRU was chosen, though the basic
>> idea is obvious: if the node has too many peers around a certain
>> location, then those peers will get fewer requests, and therefore one
>> of them is more likely to be the peer dropped.
>>
>> I'm curious whether other algorithms were explored, and if so how they
>> performed in comparison.  The obvious candidates are LFU (least
>> frequently used), LFU-Window (LFU within a finite time window), and
>> LFU-aging (LFU but with periodic reduction in the frequency counts, so
>> that formerly useful nodes that are no longer useful eventually age
>> out).
>>
>> LFU-aging is normally described as having discrete aging steps at
>> (somewhat) large intervals, because it is normally discussed in the
>> context of a large cache where the O(n) time to perform the aging
>> computation is problematic.  However, we could do continuous
>> exponential aging at every request without any problems, because the
>> number of connections is small.  That is, every time a request
>> succeeds, we first multiply the score for each connection by some
>> constant alpha (0 < alpha <= 1), and then increment the score for the
>> connection that had the success.  (For 0 < alpha <= 0.5 this is
>> precisely equivalent to LRU; for alpha = 1, it is precisely equivalent
>> to (non-window) LFU.)
>>
>> LFU-aging with decay on every request seems (intuitively, and we know
>> what I think of that...) like it should be a good match.  If the
>> difference in usefulness between different nodes is moderate, and the
>> requests are presumed to be an essentially memoryless random process
>> (a very good assumption, imho), then simple LRU is overly likely to
>> drop one of the more useful connections just because it has been
>> unlucky recently.  By using LFU with aging (or LFU-window) we get a
>> sort of running average behavior: a connection that is useful, but has
>> been unlucky (and not received any requests recently), will score
>> better than one that is mostly useless but happened to get a good
>> request very recently.
>>
>> Has this idea (or one like it) been investigated before?
>>
>> Thoughts?  Is this worth study?
>>
>> (Submitted to the bug tracker as 3985.)
>
> Oskar's theoretical work on opennet used LRU and got very good results, which 
> were *almost* backed by provable math, and well established in simulation. 
> We'd have to talk to our deep theorists if we wanted to change this.
>
> Obviously when I say LRU, I mean least-recently-succeeded-CHK-request, and 
> subject to various heuristics (such as which peers are in grace periods etc).
>

As I recall, the paper (which paper was it?  I can't find it, but I've
probably just overlooked it) gave a strong theoretical basis (a proof?
I don't recall) for believing that LRU converges on an optimal
distribution.

However, I don't believe that work addresses the rate of convergence
or behavior under join-leave churn.  It's entirely possible that LFU
or something similar would also converge, and converge faster -- that
would not conflict with the statement that LRU converges on optimal.
Rate of convergence is closely related to steady-state behavior on a
network with a constant (average) churn rate; we expect such a network
to always be close to optimal, but never optimal (because recently
joined nodes haven't yet optimized their connections, and other nodes
haven't yet accommodated the recent departures).  How close to optimal
it stays will depend on the churn rate, and it's possible that
something other than LRU produces better steady-state behavior.
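
To pin down the continuous-aging scheme I described above, here's a
minimal sketch in Java (class and method names are made up for
illustration; this isn't the actual opennet peer-selection code):

import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of continuous exponential aging for opennet peer
 * scores.  Hypothetical names; not the real peer-selection code.
 */
public class DecayingPeerScores {
    private final double alpha;  // decay factor, 0 < alpha <= 1
    private final Map<String, Double> scores = new HashMap<String, Double>();

    public DecayingPeerScores(double alpha) {
        if (alpha <= 0.0 || alpha > 1.0)
            throw new IllegalArgumentException("alpha must be in (0, 1]");
        this.alpha = alpha;
    }

    public void addPeer(String peer) {
        if (!scores.containsKey(peer)) scores.put(peer, 0.0);
    }

    /** Called once per successful CHK request. */
    public void onSuccess(String peer) {
        // Decay every score first.  With a small number of connections
        // this O(n) pass per request is cheap, so we can age
        // continuously rather than in large discrete steps.
        for (Map.Entry<String, Double> e : scores.entrySet())
            e.setValue(e.getValue() * alpha);
        // Then credit the peer that served the request.
        scores.put(peer, scores.get(peer) + 1.0);
    }

    /**
     * The drop candidate is the lowest-scoring peer.  For alpha <= 0.5
     * the most recent success (weight 1) strictly outweighs any finite
     * run of older ones (alpha + alpha^2 + ... + alpha^k < 1), so the
     * ordering is exactly LRU; for alpha = 1 there is no decay, and
     * this is plain (non-window) LFU.
     */
    public String dropCandidate() {
        String worst = null;
        double min = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, Double> e : scores.entrySet()) {
            if (e.getValue() < min) {
                min = e.getValue();
                worst = e.getKey();
            }
        }
        return worst;
    }
}

The alpha = 0.5 boundary is tight: at 0.5 any finite run of older
successes still sums to strictly less than 1, but anything above 0.5
lets a long enough run of old successes outweigh a single recent one.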
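
And to make the "useful but recently unlucky peer" argument testable,
a toy simulation along these lines would compare how often each
policy's drop victim is actually the most useful peer (all the
usefulness values and parameters below are invented for illustration):

import java.util.Random;

/**
 * Toy simulation sketch: peer "usefulness" is a fixed success
 * probability and requests are a memoryless random process, per the
 * assumption above.  All numbers here are invented for illustration.
 */
public class DropPolicySim {
    public static void main(String[] args) {
        Random rng = new Random(42);
        // Peer 0 is the genuinely most useful peer.
        double[] usefulness = { 0.30, 0.25, 0.25, 0.20 };
        int n = usefulness.length;
        int trials = 100000, requestsPerTrial = 50;
        double alpha = 0.9;  // LFU-aging decay factor
        int lruDroppedBest = 0, agingDroppedBest = 0;

        for (int t = 0; t < trials; t++) {
            int[] lastSuccess = new int[n];   // LRU state
            double[] score = new double[n];   // LFU-aging state
            for (int r = 1; r <= requestsPerTrial; r++) {
                // Choose which peer serves this request, in proportion
                // to its usefulness.
                double x = rng.nextDouble(), acc = 0.0;
                int winner = n - 1;
                for (int i = 0; i < n; i++) {
                    acc += usefulness[i];
                    if (x < acc) { winner = i; break; }
                }
                lastSuccess[winner] = r;
                for (int i = 0; i < n; i++) score[i] *= alpha;
                score[winner] += 1.0;
            }
            // Each policy drops its lowest-ranked peer; count how
            // often the victim is peer 0, the most useful one.
            int lruVictim = 0, agingVictim = 0;
            for (int i = 1; i < n; i++) {
                if (lastSuccess[i] < lastSuccess[lruVictim]) lruVictim = i;
                if (score[i] < score[agingVictim]) agingVictim = i;
            }
            if (lruVictim == 0) lruDroppedBest++;
            if (agingVictim == 0) agingDroppedBest++;
        }
        System.out.printf("LRU dropped the most useful peer %.2f%% of trials%n",
                100.0 * lruDroppedBest / trials);
        System.out.printf("LFU-aging (alpha=%.1f) did so %.2f%% of trials%n",
                alpha, 100.0 * agingDroppedBest / trials);
    }
}

Extending something like this with join-leave churn (peers entering
and leaving mid-trial) would get at the steady-state question
directly.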

Evan Daniel
