Hi All,

Working on designing a caching layer for my website and I wanted to get some
opinions from memcached users.  There are two issues I'm working through:
1) Level of granularity to cache data at
2) Version compatibility across software releases

The primary applications that would be using the cache are developed in Java
and use a smallish (~20 classes) domain object model.  In a few use cases,
as you might imagine, we only need a few attributes from 2 or 3 different
domain objects to service a request.

How granular is the data that folks are typically putting into memcached?
Since there is support for batched gets, one option at the far end of the
spectrum would be to cache each attribute separately.  I could see there
being a lot of overhead on puts in this case, and it's probably not very
efficient overall.  The other end of the spectrum would be to cache one
object that references all of the other related data, often reading more
data than we need from the cache.
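To make the two extremes concrete, here is a rough sketch using a plain HashMap as a stand-in for the memcached client (a real client's multiget, e.g. spymemcached's getBulk, would play the role of the batched read).  All the names and keys here are hypothetical:

```java
import java.util.*;

// Stand-in for a memcached client; illustrates coarse vs fine granularity.
public class GranularitySketch {
    static Map<String, Object> cache = new HashMap<>();

    // Coarse-grained: one key per domain object, serialized as a whole.
    static void putCoarse(String userId, Map<String, Object> user) {
        cache.put("user:" + userId, user);
    }

    // Fine-grained: one key per attribute; a request needing two
    // attributes becomes a batched get over two keys (many small puts).
    static void putFine(String userId, Map<String, Object> user) {
        for (Map.Entry<String, Object> e : user.entrySet()) {
            cache.put("user:" + userId + ":" + e.getKey(), e.getValue());
        }
    }

    // Plays the role of a client-side multiget / batched get.
    static Map<String, Object> multiGet(List<String> keys) {
        Map<String, Object> out = new HashMap<>();
        for (String k : keys) {
            if (cache.containsKey(k)) out.put(k, cache.get(k));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> user = new HashMap<>();
        user.put("name", "alice");
        user.put("email", "a@example.com");
        user.put("bio", "long text most requests never read");

        putFine("42", user);
        // Fetch only the two attributes this request actually needs.
        Map<String, Object> got = multiGet(
                Arrays.asList("user:42:name", "user:42:email"));
        System.out.println(got.size());              // 2
        System.out.println(got.get("user:42:name")); // alice
    }
}
```

The trade-off shows up in the put path: the fine-grained version issues one put per attribute, while the coarse version issues one put but forces every read to deserialize the whole object.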

The last consideration I'm thinking through in all of this is how to manage
serializable class versioning.  Do people generally take an optimistic
approach here and, if there is a serialization exception on read, just
replace what's in the cache?  Or do you include a class version indicator as
part of the key?  If it's part of the key, how do you make sure there aren't
two live versions with potentially different attribute values in the cache?
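For what it's worth, here is a minimal sketch of the version-in-the-key idea, assuming the class's serialVersionUID (or a release tag) is used as the version component.  The key builder and class names are hypothetical, and this is only an illustration of the problem, not an endorsement over the optimistic approach:

```java
// Sketch: embed a class version in the cache key so that readers with
// an incompatible class never see bytes they cannot deserialize.
public class VersionedKey {
    static String key(String className, long version, String id) {
        return className + ":v" + version + ":" + id;
    }

    public static void main(String[] args) {
        // Two software releases with different class versions read and
        // write disjoint keys, so neither gets a serialization error...
        String oldKey = key("User", 1L, "42");
        String newKey = key("User", 2L, "42");
        System.out.println(oldKey); // User:v1:42
        System.out.println(newKey); // User:v2:42
        // ...but during a rolling deploy both keys can be live with
        // different attribute values -- exactly the consistency question
        // above.  Possible mitigations (untested assumptions): short TTLs
        // during the rollout, or deleting/writing through both keys from
        // the new release until the old one is fully retired.
    }
}
```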

Thanks for your thoughts,
---Marc
