> Basically, I'm trying to understand when to use Cache::Cache, vs. Berkeley
> DB, and locking issues.  (Perrin, I've been curious why at etoys you used
> Berkeley DB over other caching options, such as Cache::Cache).

Cache::Cache didn't exist at the time.  BerkeleyDB seemed easier than
rolling our own file cache, but in retrospect I'm not sure it was the right
way to go.  We spent a fair amount of time figuring out how to work with
BerkeleyDB effectively.

> 1) use Storable and write the files out myself.
> 2) use Cache::FileCache and have the work done (but can I traverse?)
> 3) use Berkeley DB (I understand the issues discussed in The Guide)

If you do use BerkeleyDB, I suggest you just use the simple database-level
lock.  Otherwise, you have to think about deadlocks and I found the deadlock
daemon that comes with it kind of difficult to use.
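
To make that concrete, here's a rough sketch (not the code we actually ran,
and the paths are made up): opening the environment as a Concurrent Data
Store gives you one big lock per database, so writers are serialized for you
and there's no deadlock detector to babysit.

    use BerkeleyDB;

    # One way to get a simple database-level lock: open the environment
    # as a Concurrent Data Store (DB_INIT_CDB).  BerkeleyDB serializes
    # writers per database, so there is no deadlock handling to worry
    # about.  The -Home directory is invented here and must already exist.
    my $env = BerkeleyDB::Env->new(
        -Home  => '/tmp/cache_env',
        -Flags => DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL,
    ) or die "can't open environment: $BerkeleyDB::Error";

    my $db = BerkeleyDB::Hash->new(
        -Filename => 'cache.db',
        -Env      => $env,
        -Flags    => DB_CREATE,
    ) or die "can't open database: $BerkeleyDB::Error";

    $db->db_put( 'some_key', 'some_value' );
    my $value;
    $db->db_get( 'some_key', $value );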

> So, what kind of questions and answers would help me weigh the options?

If you have too many records, I suppose Cache::FileCache might eat up all
your inodes, since it stores each cached item as a separate file.  You can
probably go pretty far on a modern OS before you hit a problem with this.
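
For comparison, the Cache::FileCache side is only a few lines.  The
namespace and numbers here are placeholders; the cache_depth option controls
how many levels of subdirectories it spreads those files across.

    use Cache::FileCache;

    # Minimal Cache::FileCache usage.  The namespace, expiry time and
    # cache_depth values are placeholders for this example.
    my $cache = Cache::FileCache->new({
        namespace          => 'my_app',
        default_expires_in => 600,   # seconds
        cache_depth        => 3,     # levels of subdirectories
    });

    $cache->set( 'user_42', { name => 'bob' } );
    my $data = $cache->get('user_42');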

I always wanted to benchmark Cache::FileCache against BerkeleyDB for
different read/write loads and see how they do.  I think FileCache would win
on Linux.
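
If anyone beats me to it, the Benchmark module makes the comparison easy to
set up.  Given the $cache and $db handles from the sketches above, something
like this prints relative rates; adjust the mix of sets and gets to match
the read/write load you care about.

    use Benchmark qw(cmpthese);

    # Run each code block for roughly 5 CPU seconds and print a table
    # comparing iterations per second for the two back ends.
    my $record = 'x' x 500;    # a dummy 500-byte value

    cmpthese( -5, {
        filecache  => sub {
            $cache->set( 'bench_key', $record );
            $cache->get('bench_key');
        },
        berkeleydb => sub {
            $db->db_put( 'bench_key', $record );
            my $out;
            $db->db_get( 'bench_key', $out );
        },
    });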

> With regard to locking, IIRC, Cache::Cache doesn't lock, rather writes go
> to a temp file, then there's an atomic rename.  Last in wins.  If updates
> to a record are not based on previous content (such as a counter file) is
> there any reason this is not a perfectly good method -- as opposed to
> flock?

That should be fine.
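
In code, that pattern looks roughly like this (the function name and paths
are just for the example).  The one catch is that rename() is only atomic
when the temp file is on the same filesystem as the target, which is why it
goes in the same directory.

    use Storable qw(nstore retrieve);
    use File::Temp qw(tempfile);
    use File::Basename qw(dirname);

    # "Last in wins" update: serialize the whole record to a temp file in
    # the same directory, then rename() it over the real file.  Readers
    # see either the old version or the new one, never a partial write.
    sub atomic_store {
        my ($data, $path) = @_;
        my ($fh, $tmp) = tempfile( DIR => dirname($path), UNLINK => 0 );
        close $fh or die "close $tmp: $!";
        nstore( $data, $tmp ) or die "nstore to $tmp failed: $!";
        rename( $tmp, $path ) or die "rename $tmp to $path failed: $!";
    }

    # Usage: /tmp/cache is a made-up directory that must already exist.
    atomic_store( { name => 'bob' }, '/tmp/cache/user_42' );
    my $record = retrieve('/tmp/cache/user_42');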

- Perrin
