FYI - we made the change on one server and it does appear to have resolved
premature key expiration.
Effectively, what appears to have been happening was that every so often a
client was unable to connect to one or more of the memcached servers. When
this happened, the client's key distribution changed.
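To make the failure mode concrete, here is a minimal sketch (not any particular client's actual code) assuming a naive modulo-based distribution; real clients may use consistent hashing, which remaps far fewer keys on a node failure:

```python
# Hypothetical illustration of why a dead node changes key distribution:
# if the client picks a server with hash(key) % len(alive_servers),
# dropping one server remaps most keys to a different server.
import hashlib

def pick_server(key, servers):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
keys = [f"user:{i}" for i in range(1000)]
before = {k: pick_server(k, servers) for k in keys}

# One server becomes unreachable; the client fails over to the rest.
degraded = servers[:2]
after = {k: pick_server(k, degraded) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved}/{len(keys)} keys now map to a different server")
```

With modulo distribution roughly two thirds of the keys move, so cached values are suddenly looked up on servers that never stored them — which looks exactly like premature expiration.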
Or you could disable the failover feature...
On Tue, 6 Jul 2010, Darryl Kuhn wrote:
Hi,
I just started playing with memcached. While doing very basic stuff I
found one thing that confused me a lot.
I have memcached running with default settings - 64M of memory for
caching.
1. Called flushALL to clean the cache.
2. Inserted 100 byte arrays of 512K each - this should consume about
Hi Sergei,
For various reasons (performance, avoiding memory fragmentation),
memcached uses a memory allocation approach called slab allocation. The
memcached flavor of it can be found here:
http://code.google.com/p/memcached/wiki/MemcachedSlabAllocator
Chances are, your items didn't fit
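A rough sketch of the chunk-size math behind that page. The constants here are illustrative assumptions (a ~1MB page, a 1.25 growth factor, a 96-byte smallest chunk); the real values depend on the server version and its -f/-n settings:

```python
# Sketch of memcached-style slab classes: chunk sizes grow geometrically,
# and each item is stored in the smallest class whose chunk can hold it.
PAGE_SIZE = 1024 * 1024   # assumed 1MB slab page
GROWTH = 1.25             # assumed growth factor (memcached's -f)

def slab_classes(min_chunk=96, page=PAGE_SIZE, factor=GROWTH):
    sizes, size = [], min_chunk
    while size < page:
        sizes.append(size)
        size = int(size * factor)
    sizes.append(page)    # largest class: one chunk fills the whole page
    return sizes

def class_for(item_size, classes):
    # Smallest class whose chunk fits the item (key + header + value).
    for c in classes:
        if item_size <= c:
            return c
    raise ValueError("item larger than a page")

classes = slab_classes()
chunk = class_for(512 * 1024 + 100, classes)  # ~512K value plus overhead
print(f"stored in a {chunk}-byte chunk, {PAGE_SIZE // chunk} item(s) per 1MB page")
```

Note how a value just over 512K lands in a chunk well past half a page, so only one item fits per page and the rest of the chunk is wasted space.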
Here's a more succinct and to the point page:
http://code.google.com/p/memcached/wiki/NewUserInternals
^ If your question isn't answered here ask for clarification and I'll
update the page.
Your problem is about slab preallocation, I guess.
On Tue, 6 Jul 2010, Matt Ingenthron wrote:
Hi Sergei,
One more tidbit that doesn't appear in either of those links (though I'm
not sure it'd necessarily be super-appropriate in either) and that may
throw off new users: `flush`-based commands are only invalidating
objects, _not_ clearing the data store. The above links should be
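To illustrate that invalidation-vs-deletion point, a toy sketch (not memcached's actual code): a flush just records an "oldest live" timestamp, and items written before it are treated as misses on read, while their memory stays allocated until reused.

```python
# Minimal model of flush-as-invalidation: flush_all stores a timestamp;
# get() compares against it instead of deleting anything.
import time

class TinyCache:
    def __init__(self):
        self.store = {}          # key -> (stored_at, value)
        self.oldest_live = 0.0

    def set(self, key, value):
        self.store[key] = (time.monotonic(), value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if stored_at <= self.oldest_live:
            return None          # invalidated by a flush, not deleted
        return value

    def flush_all(self):
        self.oldest_live = time.monotonic()

c = TinyCache()
c.set("a", 1)
c.flush_all()
print(c.get("a"), len(c.store))  # a miss, yet the item still occupies memory
```

This is why memory usage doesn't drop after a flush: the slots are simply reused as new items come in.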
Just to pile on: test data that is all the same size like that is
probably a very bad test of memcached. Most likely, your real data is
not all the exact same size.
Brian.
http://brian.moonspot.net/
On 7/6/10 5:36 PM, siroga wrote:
Thanks, Brian,
I understand that. My goal here is to better understand possible limitations
and set expectations properly. Actually, based on what I saw in my tests (if
the second series of inserts is still 512K items, then all of them are
stored successfully), I would conclude that if my data is
If your memory is very low (only 64M), it works better the smaller your
chunks are; otherwise the slab classes for big chunks will occupy a lot
of memory. With gigs of RAM (people with dedicated memcached boxes
typically reserve 70-80% of total RAM for it), slab allocation does not
pose any problem.
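Some back-of-the-envelope arithmetic for the 64M case. These numbers are illustrative assumptions: a 1MB slab page, and one chunk per page for a ~512K item (overhead pushes the chunk past half a page, so a second item can't fit):

```python
# Rough capacity estimate for 100 x 512K inserts into a 64MB cache,
# assuming 1MB pages with one 512K-plus-overhead chunk each.
TOTAL_PAGES = 64        # 64MB of cache / 1MB per page
ITEMS_PER_PAGE = 1      # a >512K chunk leaves no room for a second item
capacity = TOTAL_PAGES * ITEMS_PER_PAGE
inserted = 100
evicted = max(0, inserted - capacity)
print(f"capacity ~{capacity} items; inserting {inserted} evicts ~{evicted}")
```

Under these assumptions roughly a third of the 100 inserts evict earlier items, even though 100 x 512K is nominally under 64MB — the per-chunk overhead and the unused tail of each page eat the difference.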
I agree that a flush