Clinton Gormley wrote:
I'd appreciate some feedback on my logic to optimise my cache (under mod_perl 1)

First, I'm assuming this is for a distributed system running on multiple servers. If not, you should just download one of the cache modules from CPAN. They're good.


I'm planning a two-level cache:
    1) Live objects in each mod_perl process
    2) Serialised objects in a database

I suggest you use either Cache::Mmap or IPC::MM for your local cache. They are both very fast and will save you memory. Also, Cache::Mmap is only limited by the size of your disk, so you don't have to do any purging.
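To illustrate, here is a minimal sketch of using Cache::Mmap for the local tier. The file path and sizing options are placeholders; check the Cache::Mmap docs for the full option list and defaults:

```perl
use strict;
use Cache::Mmap;

# One shared, mmap-ed cache file; all mod_perl processes on the
# machine can open the same file and share its contents.
my $cache = Cache::Mmap->new( '/var/cache/myapp.cache', {
    buckets    => 256,      # number of buckets in the file
    bucketsize => 65536,    # bytes per bucket
} );

my $key = 'user:42';                      # example key
$cache->write( $key, { name => 'Clinton' } );   # serialises and stores
my $obj = $cache->read($key);             # undef if not cached
```

Because the file is shared between processes, a write by one child is immediately visible to the others, which is what makes it suitable as the first tier here.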


You seem to be taking a lot of care to ensure that everything always has the latest version of the data. If you can handle slightly out-of-date data, I would suggest that you simply keep objects in the local cache with a time-to-live (which Cache::Mmap or Cache::FileCache can do for you) and just look at the local version until it expires. You would end up building the objects once per server, but that isn't so bad.
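A sketch of that read-through pattern with Cache::FileCache, which handles the time-to-live for you. The namespace, TTL, and `build_object()` builder are assumptions for illustration:

```perl
use strict;
use Cache::FileCache;

my $cache = Cache::FileCache->new({
    namespace          => 'objects',
    default_expires_in => 300,     # seconds; tune to how stale you can tolerate
});

my $obj = $cache->get($id);        # returns undef once the entry has expired
unless ( defined $obj ) {
    $obj = build_object($id);      # hypothetical: rebuild from the database
    $cache->set( $id, $obj );      # stored with the default TTL
}
```

Each server rebuilds the object at most once per TTL window, so the database only sees one miss per server rather than one per request.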

If everything really does have to be 100% up-to-date, then what you're doing is reasonable. It would be nicer not to run the check for outdated objects before processing each request, but instead to do it in a cleanup handler after the response has been sent, although that could occasionally let a request see stale data.
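Under mod_perl 1 that cleanup looks something like the following; `check_for_outdated_objects()` is a hypothetical routine standing in for whatever version check you're doing now:

```perl
use strict;
use Apache::Constants qw(OK);

# Inside your content handler, after generating the response:
$r->register_cleanup( sub {
    # Runs after the client has been sent the response, so the
    # freshness check no longer adds latency to the request itself.
    check_for_outdated_objects();
    return OK;
} );
```

The trade-off is exactly as described above: a request that arrives between the update and the next cleanup will be served from the stale copy.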

If you were using a shared cache like Cache::Mmap, you could have a cron job or a separate Perl daemon that simply purges outdated objects every minute or so, and leave that out of your mod_perl code completely.
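A sketch of such a purger, run from cron. The database schema, query, and key scheme are all assumptions; the point is only that the deletes happen outside the request cycle:

```perl
#!/usr/bin/perl
# purge_cache.pl -- run from cron, e.g.: * * * * * /usr/local/bin/purge_cache.pl
use strict;
use Cache::Mmap;
use DBI;

my $cache = Cache::Mmap->new('/var/cache/myapp.cache');
my $dbh   = DBI->connect( 'dbi:mysql:myapp', 'user', 'pass' );

# Hypothetical query: ids of objects modified since the last run.
my $ids = $dbh->selectcol_arrayref(
    'SELECT id FROM objects WHERE updated_at > NOW() - INTERVAL 1 MINUTE'
);

# Deleting the entries forces the next request to rebuild them fresh.
$cache->delete("object:$_") for @$ids;
```

Because the cache file is shared, one purger per machine is enough, and your mod_perl handlers stay free of any freshness-checking code.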

Yet another way to handle a distributed cache is to have each write to the cache send updates to the other caches using something like Spread::Queue. This is a bit more complex, but it means you don't need a second tier in your cache just to share updates.

- Perrin
