> Thanks Matt.  I did look a bit through the 'stats slabs' command, but
> perhaps I'm not interpreting it correctly.  In the most basic test,
> when I put in a 100 byte object, I'm seeing a slab being created with
> a chunk size of 176.  However, 'stats sizes' shows me one item of
> '192'.  So there's part of my confusion...am I using 176 bytes for the
> object or 192?
>
> The second part of my confusion is the ability to actually see that
> 100 byte object.  If instead of 100 bytes, I use 150, I'm not seeing
> any difference in the output of 'stats slabs' or 'stats sizes'.
> Obviously I can do these contrived tests and know what it is I'm
> putting into the cache, but I'm concerned that when it moves into a
> production setting I won't know the exact size of all the objects in
> the cache.  I'm using server version 1.2.8 at the moment.
>
> Am I reading these stats incorrectly?
>
> Any detailed help would be really appreciated.
>
> Thanks so much.

An item's size is the value length + the key length + pointers + the bytes used to
store the length + the CAS header + a couple of terminators/misc things. I don't
have the exact item overhead offhand but will look it up and put it in the
wiki.
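
If you want a ballpark figure before that's written up, something like this rough
Python sketch works; the 32-byte header here is just a guess (it varies by version,
architecture, and whether CAS is on), so treat it as a placeholder:

def estimate_item_size(key, value, item_header=32, cas=True):
    # item_header is an assumed placeholder for the per-item struct
    # (pointers, length fields, misc); it differs by version and build.
    size = item_header
    size += len(key) + 1                    # key plus terminator
    size += len(" 0 %d\r\n" % len(value))   # " <flags> <length>\r\n" suffix
    size += len(value) + 2                  # data plus trailing \r\n
    if cas:
        size += 8                           # CAS header when enabled
    return size

# A 9-byte key with a 100-byte value lands around 160 bytes under these
# assumptions, which the slab allocator then rounds up to the next chunk
# size (176 in your case).
print(estimate_item_size("user:1234", b"x" * 100))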

You can easily calculate your memory overhead on a slab:

STAT 3:chunk_size 152
STAT 3:chunks_per_page 6898
STAT 3:total_pages 291
STAT 3:total_chunks 2007318
STAT 3:used_chunks 2007310
STAT 3:free_chunks 8
STAT 3:free_chunks_end 0
STAT 3:mem_requested 271013713

chunk_size * chunks_per_page is the number of bytes in a page for this
slab class, which is 1048496 here.

* 291 pages == 305112336 bytes allocated in the slab.

mem_requested is shorthand for the amount of memory the actual items (the
total length: value + key + misc) take up within the slab.

271013713 / 305112336
        ~0.89 rounded, i.e. about 89% of the allocated memory holds actual item data.

So I've lost about 11% of the memory to overhead in this slab, on top of the
~30 bytes of overhead per item. used_chunks * that standard per-item overhead
will give you most of the rest of the memory overhead. So it's probably closer
to 60 megabytes total?
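
If you'd rather not do that math by hand, a short Python script can pull
'stats slabs' over the text protocol and print the same numbers per slab
class. This is just a sketch; the 30-byte per-item header is the rough
figure from above, not an exact constant:

import socket

ITEM_HEADER = 30  # rough per-item overhead; adjust for your version/build

def stats_slabs(host="127.0.0.1", port=11211):
    # Fetch 'stats slabs' via the text protocol, return {class_id: {stat: value}}.
    s = socket.create_connection((host, port))
    s.sendall(b"stats slabs\r\n")
    buf = b""
    while not buf.endswith(b"END\r\n"):
        buf += s.recv(4096)
    s.close()
    slabs = {}
    for line in buf.decode().splitlines():
        parts = line.split()
        # Only keep per-class lines like "STAT 3:chunk_size 152".
        if len(parts) == 3 and parts[0] == "STAT" and ":" in parts[1] and parts[2].isdigit():
            clsid, name = parts[1].split(":", 1)
            slabs.setdefault(int(clsid), {})[name] = int(parts[2])
    return slabs

for clsid, st in sorted(stats_slabs().items()):
    allocated = st["chunk_size"] * st["chunks_per_page"] * st["total_pages"]
    if allocated == 0:
        continue
    requested = st["mem_requested"]
    headers = st["used_chunks"] * ITEM_HEADER
    print("class %2d: %d bytes allocated, %.1f%% requested by items, ~%d bytes of item headers"
          % (clsid, allocated, 100.0 * requested / allocated, headers))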

'stats sizes' throws items into the nearest "slab" as if everything were
cut into 64-byte slabs. That rounding is probably what's putting your item
into the 192-byte bucket. If your item goes into the 176-byte slab class,
you're definitely using 176 bytes or less.
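
The rounding itself is trivial; the sketch below (using the 64-byte figure
above, which may differ by version) shows how anything between 129 and 192
bytes ends up reported as the 192 bucket:

def sizes_bucket(item_size, granularity=64):
    # Round the item size up to the next bucket boundary, as 'stats sizes' does.
    return ((item_size + granularity - 1) // granularity) * granularity

print(sizes_bucket(150))  # -> 192
print(sizes_bucket(176))  # -> 192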

The idea is that we trade some memory overhead for consistently fast O(1)
operations. We know ways to improve the efficiency and will be doing so over
the next few months, but I wouldn't say this is horrific at all. Remove some
headers, or switch back to malloc or jemalloc/etc., and you lose that
consistent performance.

The overhead is most pronounced for small items as well. Consider reducing
your key size, disabling CAS (-C) if you never use it (saves 8 bytes per item),
or reducing the slab growth factor to close down the rounding overhead.
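
To put a number on the CAS part for the slab above (just arithmetic on the
stats already shown):

used_chunks = 2007310          # from the 'stats slabs' output above
cas_bytes = used_chunks * 8    # 8 bytes per item saved by starting memcached with -C
print("%.1f MB saved by disabling CAS" % (cas_bytes / 1024.0 / 1024.0))  # ~15.3 MB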

As soon as I get a chance I'm adding some more modes to damemtop so folks
can more easily see slab overhead... The mem_requested stat Trond added
and the pointer_size stat let us trivially calculate overhead, given that
you already understand that a stored "value" is actually "key + value" in
length, not just the value you're storing.

I'll throw out a side pointer here, actually: this sort of knowledge is why
it's nice that memcached can store 0-byte values. If your client allows it,
you can store bits in the flags section; otherwise the existence of the key
itself may be enough data for some of the things you store.
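
As a rough sketch of that trick over the raw text protocol (the key name and
flag value here are made up for illustration), you can set a zero-byte value
and let the 32-bit flags field carry the payload:

import socket

def set_flag_only(key, flags, host="127.0.0.1", port=11211, exptime=0):
    # Store a zero-byte value, using the 32-bit flags field as the payload.
    s = socket.create_connection((host, port))
    s.sendall(("set %s %d %d 0\r\n\r\n" % (key, flags, exptime)).encode())
    reply = s.recv(1024)            # expect b"STORED\r\n"
    s.close()
    return reply.strip() == b"STORED"

def get_flags(key, host="127.0.0.1", port=11211):
    # Return the flags of a stored key, or None if it does not exist.
    s = socket.create_connection((host, port))
    s.sendall(("get %s\r\n" % key).encode())
    buf = b""
    while not buf.endswith(b"END\r\n"):
        buf += s.recv(4096)
    s.close()
    for line in buf.decode().splitlines():
        if line.startswith("VALUE "):
            return int(line.split()[2])   # "VALUE <key> <flags> <bytes>"
    return None

# Hypothetical example: mark a user as "seen" with a small counter in flags.
set_flag_only("seen:user:1234", 7)
print(get_flags("seen:user:1234"))  # -> 7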

-Dormando
