Please try with a memcached instance set to 64 megabytes or higher. It
won't properly allocate all of the slab classes if the memory limit is
much lower than the default, which causes all sorts of weirdness.
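For example (the -m flag sets the memory limit in megabytes; -vv is the same verbose mode mentioned further down):

```shell
# Give memcached at least 64 MB so every slab class can be allocated;
# very small limits (like -m 1) can't hold all of them.
memcached -m 64 -vv
```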
Try ramping up the RAM, try what Brian Moon said, and if you still see
this behavior please submit a script to the ML to repeat it, for
verification.
Thanks!
-Dormando
Mathieu Viel wrote:
Hi guys. I have a question about the cache structure.
I want to use memcached to store some frequently used data chunks, and
the idea is to always keep the most frequently requested data in
memcached; if I insert some new data when memcached's memory is fully
used, I want the least-requested data to be replaced.
I noticed in the FAQ that the cache structure is an LRU, so I thought
"great, I'll have almost nothing to do to implement this!" but, in order
to make sure, I ran some tests and I am disappointed by the results.
With a 1 MB memcached, I first set an item with "1_1_1" as key and "1" as
value; I then get the value of this item (which works ^^) and then
set some new random items I will never retrieve, in order to fill the
memory (I made sure the key will never be "1_1_1" for these
items); I then get the value of my "1_1_1" item to check whether it's
still there. This "simple" process works fine, but once the memory is
filled, after some time, I can't get the value of my "1_1_1" item
anymore :S
I tried checking in verbose mode (-vv) but couldn't find any useful
information about this "issue".
I thought that since I frequently get the value of this item, it would
never become the least-recently-used one, so it would never be evicted.
Am I wrong? I know about the different slab sizes, but since I always set
the value to "1", I think all these items end up in the same slab class.
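The behavior I expected can be sketched in pure Python (a toy LRU built on OrderedDict, not memcached's slab-based allocator — just to show the assumption that frequent reads keep an item resident):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: get() refreshes an item's recency, and set()
    evicts the least-recently-used entry once capacity is reached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the LRU entry
        self.items[key] = value

cache = LRUCache(capacity=4)
cache.set("1_1_1", "1")
for i in range(100):
    cache.set("filler_%d" % i, "1")   # filler keys, never read again
    assert cache.get("1_1_1") == "1"  # frequent reads keep it resident
```

Under pure LRU semantics like these, "1_1_1" survives indefinitely because each get moves it back to the most-recently-used position, while the never-read filler keys are the ones evicted.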
Thanks for your help!
--
Vivi