Thus spake Tim Kientzle <[EMAIL PROTECTED]>:
> Sean Hamilton proposes:
> 
> >Wouldn't it seem logical to have [randomized disk cache expiration] in
> >place at all times?
> 
> Terry Lambert responds:
> 
> >>:I really dislike the idea of random expiration; I don't understand
> >>:the point, unless you are trying to get better numbers on some
> >>:benchmark.
> 
> Matt Dillon concedes:
> 
> >>... it's only useful when you are cycling through a [large] data set ...
> 
> Cycling through large data sets is not really that uncommon.
> I do something like the following pretty regularly:
>    find /usr/src -type f | xargs grep function_name
> 
> Even scanning through a large dataset once can really hurt
> competing applications on the same machine by flushing
> their data from the cache for no gain.  I think this
> is where randomized expiration might really win, by reducing the
> penalty for disk-cache-friendly applications that are competing
> with disk-cache-unfriendly applications.
> 
> There's an extensive literature on randomized algorithms.
> Although I'm certainly no expert, I understand that such
> algorithms work very well in exactly this sort of application,
> since they "usually" avoid worst-case behavior under a broad
> variety of inputs.  The current cache is, in essence,
> tuned specifically to work badly on a system where applications
> are scanning through large amounts of data.  No matter what
> deterministic caching algorithm you use, you're choosing
> to behave badly under some situation.

Yes, if you randomly vary the behavior of the algorithm, you can
guarantee that performance will never be too bad, on average, for
any particular input; by the same token, it will never be very
good either.

You can't mathematically prove everything about the memory access
patterns of real-world programs, but LRU seems to do well in a
wide variety of situations.  It does, however, have its worst
cases.  A random replacement algorithm is very unlikely to do *as*
badly as LRU does in LRU's worst case; instead, its performance is
consistently mediocre across all inputs.  Keep in mind that most
real-world programs exhibit enough locality of reference to favor
LRU over randomness.
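
To make that concrete, here is a toy user-space simulation (nothing
from the FreeBSD tree; cache and trace sizes are made up for
illustration) that runs LRU and random replacement against LRU's
worst case, a cyclic scan over a working set just larger than the
cache:

    /*
     * Toy comparison of LRU vs. random replacement on a cyclic
     * scan.  Illustrative only; all sizes are arbitrary.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_SLOTS 100     /* blocks the cache can hold */
    #define DATA_BLOCKS 110     /* working set just larger than cache */
    #define ACCESSES    100000  /* length of the simulated trace */

    static int cache[CACHE_SLOTS];  /* block held in each slot */
    static int stamp[CACHE_SLOTS];  /* last-use time, for LRU */

    static double
    run(int use_random)
    {
            int i, t, hits = 0;

            for (i = 0; i < CACHE_SLOTS; i++) {
                    cache[i] = -1;              /* empty slot */
                    stamp[i] = i - CACHE_SLOTS; /* fake ages: fill in order */
            }
            for (t = 0; t < ACCESSES; t++) {
                    int block = t % DATA_BLOCKS; /* 0,1,...,N-1,0,... */
                    int victim = 0, found = -1;

                    for (i = 0; i < CACHE_SLOTS; i++)
                            if (cache[i] == block)
                                    found = i;
                    if (found >= 0) {
                            hits++;
                            stamp[found] = t;
                    } else {
                            if (use_random)
                                    victim = rand() % CACHE_SLOTS;
                            else    /* LRU: evict the oldest stamp */
                                    for (i = 1; i < CACHE_SLOTS; i++)
                                            if (stamp[i] < stamp[victim])
                                                    victim = i;
                            cache[victim] = block;
                            stamp[victim] = t;
                    }
            }
            return ((double)hits / ACCESSES);
    }

    int
    main(void)
    {
            printf("LRU hit rate:    %.3f\n", run(0));
            printf("random hit rate: %.3f\n", run(1));
            return (0);
    }

On this trace LRU's hit rate collapses to essentially zero, because
each block is evicted shortly before it comes around again, while
random replacement keeps a useful fraction of the set resident.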

So should FreeBSD make it possible to ask for random replacement?
Probably, since it would be helpful for those times when you
*know* that LRU isn't going to do the right thing.  (In the
sequential read special case the OP mentions, random replacement
is better than LRU, but still worse than a deterministic algorithm
that just caches the prefix of the file that will fit in memory.
So in this situation we could do even better in theory.)
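
To put numbers on that parenthetical: with a cache of C blocks and a
cyclic scan over N > C blocks, pinning blocks 0 through C-1 hits on
exactly C/N of all accesses (ignoring the cold first pass), versus
essentially zero for LRU.  A minimal stand-alone sketch, with made-up
sizes and a hypothetical should_cache() helper:

    /*
     * Deterministic special case: pin the prefix of the file that
     * fits in memory.  Illustrative only.
     */
    #include <stdio.h>

    #define CACHE_SLOTS 100     /* C: blocks the cache can hold */
    #define DATA_BLOCKS 110     /* N: blocks in the scanned file */

    /* Admission policy: only blocks in the pinned prefix are cached. */
    static int
    should_cache(int block)
    {
            return (block < CACHE_SLOTS);
    }

    int
    main(void)
    {
            int t, hits = 0, accesses = 100000;

            for (t = 0; t < accesses; t++)
                    if (should_cache(t % DATA_BLOCKS))
                            hits++;     /* prefix block: always resident */
            printf("prefix hit rate: %.3f (C/N = %.3f)\n",
                (double)hits / accesses,
                (double)CACHE_SLOTS / DATA_BLOCKS);
            return (0);
    }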

Should randomness be part of the default replacement algorithm, as
the OP suggests?  Probably not, since that would be pessimizing
performance in the common case for the sake of improving it in an
uncommon case.

Should the system be able to detect cases where the default
replacement algorithm is failing and dynamically modify its
behavior?  I think that would be really cool, if it is possible...
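
Purely as a sketch of the idea (this is not how the FreeBSD buffer
cache is organized, and the threshold is invented): per-file
sequential-run detection, in the spirit of the existing read-ahead
heuristics, could flag a scan and let the replacement code switch to
something scan-resistant, e.g. recycling the scanning file's own
pages:

    /*
     * Toy scan detector.  Illustrative only; SEQ_THRESHOLD and the
     * struct layout are invented for this sketch.
     */
    #include <stdio.h>

    #define SEQ_THRESHOLD 32    /* consecutive blocks before we call it a scan */

    struct file_state {
            long last_block;    /* last block read from this file */
            int  seq_run;       /* length of the current sequential run */
    };

    /* Return 1 once this file looks like a sequential scan. */
    static int
    note_access(struct file_state *fs, long block)
    {
            if (block == fs->last_block + 1)
                    fs->seq_run++;   /* run continues */
            else
                    fs->seq_run = 0; /* pattern broken: back to default policy */
            fs->last_block = block;
            return (fs->seq_run >= SEQ_THRESHOLD);
    }

    int
    main(void)
    {
            struct file_state fs = { -1, 0 };
            int prev = 0, cur;
            long b;

            for (b = 0; b < 100; b++) {  /* simulate a sequential read */
                    cur = note_access(&fs, b);
                    if (cur && !prev)
                            printf("scan detected at block %ld\n", b);
                    prev = cur;
            }
            return (0);
    }

The hard parts, of course, are doing this cheaply and backing off
gracefully when the guess turns out to be wrong.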
