On Jan 15, 2007, at 3:04 PM, John ORourke wrote:
The thought process I've gone through corroborates this - at the
most complex point I was looking at specifying a caching policy for
each table - e.g. (in a shop website) product info except stock
levels can be cached for a day, static pages can be cached for
ages, etc. - but it always comes down to finding an efficient way of
invalidating cached records.
DB-based caching is a good compromise: it's simple enough to be
fast and still gives a server-wide (process-independent)
improvement. The only step beyond that would be adding another
daemon which abstracts the DB and understands the application
domain well enough to cache intelligently.
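(As a rough sketch of that per-table policy idea -- the table names
and TTLs below are just made up to illustrate, with 0 meaning
"never cache, always hit the db":)

    my %cache_policy = (
        products     => 60 * 60 * 24,         # product info: a day
        stock_levels => 0,                    # always live
        static_pages => 60 * 60 * 24 * 30,    # static pages: "ages"
    );

    sub ttl_for {
        my ($table) = @_;
        return $cache_policy{$table} || 0;
    }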
Personally, this is what I do using memcached (with pg and mysql as
the db):
I only cache certain 'objects' -- on FindMeOn.com & RoadSound, I
cache account records and user profiles.
The cache essentially works by overloading the load methods in my
classes:
    # called as $obj->load( db => 'read' ) or $obj->load( db => 'write' )
    sub load {
        my ( $self, %args ) = @_;
        if ( $args{db} eq 'read' ) {
            # try the cache first; on a miss, fall back to the read db
            return $self if $self->load_memcached;
            $self->load_db;
            $self->save_memcached( 60 * 5 );    # cache for 5 minutes
            return $self;
        }
        else {
            # db => 'write' -- always pull live, then refresh the cache
            $self->load_db;
            $self->save_memcached( 60 * 5 );
            return $self;
        }
    }
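calling it looks something like this (the class names and
constructor here are just placeholders to show the db flag):

    my $profile = My::UserProfile->new( id => $user_id );
    $profile->load( db => 'read' );     # public page -- cache / read-db is fine

    my $account = My::Account->new( id => $current_user_id );
    $account->load( db => 'write' );    # current user -- always live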
i think it works exceptionally well. if i'm dealing with anything
for the current user, i pull everything live off the write dbh. if
i'm dealing with the public or for syndication, i pull off memcached
or the read-db.
memcached offers auto-expiry on a timeframe, and the cache is built
up as it's used. it cuts down dramatically on db hits.
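the expiry is just the third argument to set() -- roughly like this
with the Cache::Memcached client (server address and keys are
placeholders):

    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    my $user_id      = 42;
    my $profile_data = { name => 'example' };

    # store with a 5 minute expiry; memcached drops it automatically
    $memd->set( "profile:$user_id", $profile_data, 60 * 5 );

    # a miss after expiry just returns undef, so the cache
    # rebuilds itself as pages are requested
    my $cached = $memd->get("profile:$user_id");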
sure, it would be nice if there were a way to intelligently cache
more frequently requested items for longer and vice versa -- but
then you need to run some rrd-style profiling and do a ton of other
work behind the scenes.
// Jonathan Vanasco