On 10/31/07, erik quanstrom <[EMAIL PROTECTED]> wrote:
> > If the file is "decent", the cache must still check that
> > the file is up to date. It might not do so all the time
> > (as we do, trading correctness for performance, as you say).
> > That means the cache would fetch fresh file data as soon as
> > it sees a new qid.vers for the file. And tail -f would still work.
>
> the problem is that no files are "decent" as long as concurrent
> access is allowed.  "control" files have the decency at least
> to behave the same way all the time.
>

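(For concreteness, the revalidation described above might look something
like this -- just a rough sketch in Plan 9 C, with made-up names (Cfile,
statvers, refetch) standing in for the real 9P stat/read round trips:)

	#include <u.h>
	#include <libc.h>

	typedef struct Cfile Cfile;
	struct Cfile {
		ulong	vers;	/* qid.vers the data was cached under */
		char	*data;	/* cached contents */
		long	ndata;
	};

	ulong	statvers(char*);	/* stand-in: stat the file, return qid.vers */
	long	refetch(char*, Cfile*);	/* stand-in: re-read the whole file */

	long
	cacheread(char *path, Cfile *f, char *buf, long n, long off)
	{
		ulong v;

		v = statvers(path);
		if(v != f->vers){
			/* qid.vers moved on: fetch fresh data, so a
			   tail -f style reader still sees the new bytes */
			if(refetch(path, f) < 0)
				return -1;
			f->vers = v;
		}
		if(off >= f->ndata)
			return 0;
		if(n > f->ndata - off)
			n = f->ndata - off;
		memmove(buf, f->data+off, n);
		return n;
	}
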
Sure - however, there is a case for loose caches as well. For example,
lots of remote file data is essentially read-only, or at worst is
updated very infrequently.  Brucee had sessionfs, which, although more
specialized (I'm going to oversimplify here Brzr, so don't shoot me),
could essentially be thought of as serving a snapshot of the system.
You could cache to your heart's content because you'd always be reading
from the same snapshot.  If you ever wanted to roll the snapshot
forward, you could blow away the cache -- or, for optimum safety,
restart the entire node.  With such a mechanism you could even keep the
cache around on disk for long periods of time (as long as the session
was still exported by the file server).
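
(Again a purely hypothetical sketch, same caveats as above: with a
session-keyed cache, individual entries are never revalidated at all;
rolling forward to a newer snapshot just means emptying the cache in
one go.)

	#include <u.h>
	#include <libc.h>

	typedef struct Snapcache Snapcache;
	struct Snapcache {
		ulong	session;	/* snapshot generation being served */
		/* hash of path -> cached data omitted */
	};

	void	flushall(Snapcache*);	/* stand-in: discard every entry */

	void
	rollforward(Snapcache *c, ulong newsession)
	{
		if(newsession == c->session)
			return;		/* same snapshot; nothing to do */
		flushall(c);		/* no per-file checks needed */
		c->session = newsession;
	}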

        -eric
