> In general the Cache::* modules were designed with clarity and ease of
> use in mind.  For example, the modules tend to require absolutely no
> set-up work on the end user's part and try to be as fail-safe as
> possible.  Thus there is run-time overhead involved.  That said, I'm
> certainly not against performance.  :) These benchmarks are going to
> be tremendously useful in identifying bottlenecks.  However, I won't
> be able to optimize for these particular benchmarks, as Cache::Cache
> is designed to do something different than straight gets and sets.
>
> Again, thank you, Rob.  This is great,

That's a good point. I probably should have listed the features each module
supports to help with decisions. Cache::Cache does have the most options with
regard to limiting the time/size of objects in the cache, so that could be a
big factor in someone's choice.

> * Cache::Mmap (uses Storable)
- Can indirectly specify the maximum cache size, though purges are uneven
depending on how well data hashes into different buckets
- Has callbacks on a read/purge, so you can move any purged data to a
different data store if you want, and automatically fetch it again on the
next read when it's not in the cache (see the sketch below)
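
For reference, here's a minimal sketch of the callback idea. The callback
signatures and the bucket options below are assumptions on my part, so check
the Cache::Mmap docs before relying on them:

  use Cache::Mmap;

  # Hypothetical secondary store that purged/missing entries fall back to.
  my %backing_store;

  my $cache = Cache::Mmap->new('/tmp/bench.cmm', {
      buckets    => 256,         # number of buckets
      bucketsize => 64 * 1024,   # bytes per bucket, so roughly 16MB total
      # Called on a cache miss: fetch the value from the backing store.
      # (Assumed to return a (found, value) list - verify against the POD.)
      read  => sub {
          my ($key) = @_;
          return exists $backing_store{$key}
              ? (1, $backing_store{$key})
              : (0, undef);
      },
      # Called when an entry is pushed out of a full bucket: keep it elsewhere.
      write => sub {
          my ($key, $value) = @_;
          $backing_store{$key} = $value;
      },
  });

  $cache->write(foo => { some => 'data' });
  my $val = $cache->read('foo');   # falls back to %backing_store if purged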

> * Cache::FileCache (uses Storable)
> * Cache::SharedMemoryCache (uses Storable)
- Can specify the maximum cache size (Cache::SizeAwareFileCache) and/or the
maximum time an object is allowed to stay in the cache (see the sketch below)
- Follows the Cache::Cache interface
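
For what it's worth, here's a minimal sketch of that interface; the namespace
and the numbers are just placeholders:

  use Cache::SizeAwareFileCache;

  my $cache = Cache::SizeAwareFileCache->new({
      namespace          => 'Benchmark',       # keeps caches separate on disk
      default_expires_in => 600,               # seconds an object may live
      max_size           => 10 * 1024 * 1024,  # bytes the cache is trimmed to
  });

  $cache->set('key1', { some => 'data' });   # Storable handles serialization
  $cache->set('key2', 'short lived', 30);    # per-key expiry override, seconds
  my $data = $cache->get('key1');            # undef if expired or purged
  $cache->purge();                           # explicitly drop expired objects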

> * DBI (I used InnoDB), use Storable, always do 'delete' then 'insert'
> * DBI, use Storable, do 'select' then 'insert' or 'update'
- Can't specify any limits directly
- Could add a 'size' and 'timestamp' column to each row and use a daemon to
iterate through and clean up based on time and size (see the sketch below)
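
Something along these lines, assuming a made-up table
cache(cache_key, value, size, created) - the table and column names are just
for illustration:

  use DBI;
  use Storable qw(nfreeze);

  my $dbh = DBI->connect('dbi:mysql:database=bench', 'user', 'pass',
                         { RaiseError => 1 });

  # "delete then insert" style set, with bookkeeping columns for the cleaner
  sub cache_set {
      my ($key, $ref) = @_;
      my $frozen = nfreeze($ref);
      $dbh->do('DELETE FROM cache WHERE cache_key = ?', undef, $key);
      $dbh->do('INSERT INTO cache (cache_key, value, size, created)
                VALUES (?, ?, ?, ?)',
               undef, $key, $frozen, length($frozen), time());
  }

  # daemon/cron side: drop anything older than $max_age seconds
  sub cache_cleanup {
      my ($max_age) = @_;
      $dbh->do('DELETE FROM cache WHERE created < ?', undef, time() - $max_age);
      # a size sweep would SUM(size) and delete oldest rows until under limit
  }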

> * MLDBM::Sync::SDBM_File (uses Storable)

> * IPC::MM
- Can't specify any limits directly
- Could create a secondary tied db/mm hash with a key -> [ size, timestamp ]
mapping and use a daemon to iterate through and clean up based on time and
size (see the sketch below)
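
And a rough sketch of the secondary hash idea. The IPC::MM setup calls here
(mm_create, mm_make_hash, the IPC::MM::Hash tie class) are from memory, so
double-check them against the module before trusting this:

  use IPC::MM;
  use Storable qw(nfreeze thaw);

  my $mm = IPC::MM::mm_create(4 * 1024 * 1024, '/tmp/mm_bench');

  tie my %cache, 'IPC::MM::Hash', IPC::MM::mm_make_hash($mm);
  # secondary index: key -> [ size, timestamp ]
  tie my %meta,  'IPC::MM::Hash', IPC::MM::mm_make_hash($mm);

  sub mm_set {
      my ($key, $ref) = @_;
      my $frozen = nfreeze($ref);
      $cache{$key} = $frozen;
      $meta{$key}  = nfreeze([ length($frozen), time() ]);
  }

  # daemon side: walk the metadata and drop stale entries
  sub mm_cleanup {
      my ($max_age) = @_;
      my @stale;
      while (my ($key, $packed) = each %meta) {
          my (undef, $stamp) = @{ thaw($packed) };
          push @stale, $key if time() - $stamp > $max_age;
      }
      delete @cache{@stale};
      delete @meta{@stale};
  }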

Rob

