Hi all, I've been searching around for recent experiences storing larger-than-normal objects in memcached. Since I didn't find much, I'm hoping people will share theirs.
I've installed 1.2.5 and edited slabs.c as follows (a patch-style sketch is in the P.S. below):

    #define POWER_BLOCK 16777216

This has the effect (I believe!) of raising the max object size to 16MB, and it seems to work: running with the -vv option shows a nice distribution of slab classes created all the way up to 16MB, and memcached runs fine. So I'm optimistic.

Now here are the questions. Have other people used this technique successfully? What 'gotchas' might be waiting around the corner? Perhaps related: I'm curious why the memcached protocol limits the max size to 1MB in the first place. Would it make sense to make the max slab size a command-line option? I guess not that many people need to store large objects, or this would come up more often.

In my case, I'm running a Rails web app that makes use of large hashtables. I realize there are other workarounds: e.g., I could refactor so the data is stored in chunks smaller than 1MB (though that limit seems fairly arbitrary; a rough sketch of the chunking approach is in the P.P.S.), or share the lookup tables between processes by dumping them to a file. But sharing via memcached seems more flexible and robust - assuming it doesn't blow up! I'm running on EC2 with ample memory, so the potential inefficiency of allocating large slabs is not currently a concern.

So, in short: is setting a large slab size a reasonable thing to do, or am I making a big mistake? Thoughts appreciated. And huge thanks to the memcached contributors for such a valuable tool!

Nick
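
P.S. For anyone who wants to try the same change, here it is in patch form. In my 1.2.5 tree the stock value is 1048576 (1MB, which is where the default limit comes from); the surrounding context lines are from memory, so treat this as a sketch rather than an exact diff:

    --- slabs.c.orig
    +++ slabs.c
     #define POWER_SMALLEST 1
     #define POWER_LARGEST  200
    -#define POWER_BLOCK 1048576
    +#define POWER_BLOCK 16777216

(16777216 is just 16 * 1048576, i.e. exactly 16MB.)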

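P.P.S. For completeness, here's roughly what the chunking workaround would look like on the client side. This is a sketch only: it assumes libmemcached, and the key-naming scheme ("<key>:<n>" plus a "<key>:count" manifest) and the 900KB chunk size are arbitrary choices of mine, not anything memcached itself prescribes.

    #include <libmemcached/memcached.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stay comfortably under the default 1MB item limit. */
    #define CHUNK_SIZE (900 * 1024)

    /* Store `len` bytes under derived keys "<key>:0", "<key>:1", ...,
     * plus "<key>:count" recording how many chunks to fetch back.
     * Returns 0 on success, -1 on the first failed set. */
    static int set_chunked(memcached_st *mc, const char *key,
                           const char *data, size_t len)
    {
        size_t nchunks = (len + CHUNK_SIZE - 1) / CHUNK_SIZE;
        char subkey[256];
        char count[32];
        size_t i;

        for (i = 0; i < nchunks; i++) {
            size_t off = i * CHUNK_SIZE;
            size_t n = (len - off < CHUNK_SIZE) ? len - off : CHUNK_SIZE;
            snprintf(subkey, sizeof(subkey), "%s:%zu", key, i);
            if (memcached_set(mc, subkey, strlen(subkey),
                              data + off, n, 0, 0) != MEMCACHED_SUCCESS)
                return -1;
        }
        snprintf(count, sizeof(count), "%zu", nchunks);
        snprintf(subkey, sizeof(subkey), "%s:count", key);
        if (memcached_set(mc, subkey, strlen(subkey),
                          count, strlen(count), 0, 0) != MEMCACHED_SUCCESS)
            return -1;
        return 0;
    }

    int main(void)
    {
        memcached_st *mc = memcached_create(NULL);
        if (!mc)
            return 1;
        memcached_server_add(mc, "localhost", 11211);

        size_t len = 3 * 1024 * 1024;        /* pretend 3MB payload */
        char *big = calloc(1, len);
        if (!big)
            return 1;
        int rc = set_chunked(mc, "lookup_table", big, len);
        printf("set_chunked: %s\n", rc == 0 ? "ok" : "failed");

        free(big);
        memcached_free(mc);
        return rc == 0 ? 0 : 1;
    }

One gotcha that pushes me toward the bigger-slab approach instead: the chunks are independent items, so memcached can evict some and keep others. Any missing chunk has to be treated as a miss for the whole value.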