Memcached 1.4.13 from Ubuntu 12.04 suffers from slab calcification. Below are 
the steps to reproduce it:

1. Start memcached with a 256MB cache size on TCP port 11213:

$ memcached -m 256 -p 11213 &


2. Fill memcached with 1M 256-byte objects using the 
go-memcached-bench <https://github.com/valyala/ybc/tree/master/apps/go/memcached-bench> tool:

$ ./go-memcached-bench -valueSize=256 -itemsCount=1000000 
-workerMode=GetHit -serverAddrs=localhost:11213
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
itemsCount=[1000000]
keySize=[30]
maxPendingRequestsCount=[1024]
maxResponseTime=[20ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11213]
valueSize=[256]
workerMode=[GetHit]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done
Response time histogram
     0 -   2ms:   47.952% ############################
   2ms -   4ms:   33.843% ####################
   4ms -   6ms:   11.501% ######
   6ms -   8ms:    3.771% ##
   8ms -  10ms:    1.381% 
  10ms -  12ms:    0.606% 
  12ms -  14ms:    0.394% 
  14ms -  16ms:    0.175% 
  16ms -  18ms:    0.043% 
  18ms -1h0m0s:    0.335% 
Requests per second:     121428
Test duration:       5.756254795s
Avg response time:   2.933547ms
Min response time:     27.018us
Max response time:   211.166588ms
Cache miss count:        301028
Cache hit count:         698972
Cache miss ratio:        30.103%
Errors count:                 0


3. Try re-filling the cache with 100K 25-byte items. These items should 
easily fit in the cache (25 bytes * 100K = 2.5MB of values, well under the 
available 256MB):

$ ./go-memcached-bench -valueSize=25 -itemsCount=100000 -workerMode=GetHit 
-serverAddrs=localhost:11213
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
itemsCount=[100000]
keySize=[30]
maxPendingRequestsCount=[1024]
maxResponseTime=[20ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11213]
valueSize=[25]
workerMode=[GetHit]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done
Response time histogram
     0 -   2ms:   69.717% #########################################
   2ms -   4ms:   22.590% #############
   4ms -   6ms:    4.888% ##
   6ms -   8ms:    1.446% 
   8ms -  10ms:    0.735% 
  10ms -  12ms:    0.403% 
  12ms -  14ms:    0.080% 
  14ms -  16ms:    0.043% 
  16ms -  18ms:    0.029% 
  18ms -1h0m0s:    0.068% 
Requests per second:      19004
Test duration:       3.630707548s
Avg response time:   1.843934ms
Min response time:      31.49us
Max response time:   25.781297ms
Cache miss count:        931001
Cache hit count:          68999
Cache miss ratio:        93.100%
Errors count:                 0


Note that the cache miss ratio is an astonishing 93%. This means memcached 
cannot replace most of the 256-byte items with 25-byte items due to slab 
calcification.
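The mechanics can be sketched from memcached's default slab geometry. The 
numbers below (growth factor 1.25, first chunk size of 96 bytes, roughly 50 
bytes of per-item overhead) are assumptions based on 1.4.x defaults, not 
measurements; check `stats slabs` on your build. The point is only that the 
two item sizes land in different slab classes, and pages already assigned to 
one class are never returned to another:

```shell
# Approximate memcached's default slab classes (growth factor 1.25,
# first chunk 96 bytes, chunks rounded up to 8-byte alignment) and
# find which class each benchmark item above falls into. The 30-byte
# key size comes from the bench config; the ~50-byte item overhead
# is an assumed ballpark for the 1.4.x item header.
classes=$(awk 'BEGIN {
    size = 96
    small = 30 + 25 + 50     # key + 25-byte value + item overhead
    large = 30 + 256 + 50    # key + 256-byte value + item overhead
    for (class = 1; size <= 1048576; class++) {
        if (small <= size && !sc) sc = class
        if (large <= size && !lc) lc = class
        size = int(size * 1.25 / 8 + 0.999) * 8   # keep 8-byte alignment
    }
    printf "25-byte values -> class %d, 256-byte values -> class %d", sc, lc
}')
echo "$classes"
```

After step 2, essentially every 1MB page belongs to the large-item class, so 
the small-item class has almost nowhere to put its items; memcached then 
evicts within that starved class instead of reclaiming pages from the 
calcified one.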

It's possible to avoid slab calcification, and 
go-memcached <https://github.com/valyala/ybc/tree/master/apps/go/memcached> proves 
this. Let's reproduce the steps above with go-memcached:
1. Start go-memcached with a 256MB cache size on TCP port 11214:

$ ./go-memcached -cacheSize=256 -listenAddr=:11214 &


2. Fill the cache with 1M 256-byte items:

$ ./go-memcached-bench -valueSize=256 -itemsCount=1000000 
-workerMode=GetHit -serverAddrs=localhost:11214
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
itemsCount=[1000000]
keySize=[30]
maxPendingRequestsCount=[1024]
maxResponseTime=[20ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11214]
valueSize=[256]
workerMode=[GetHit]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done
Response time histogram
     0 -   2ms:   56.595% #################################
   2ms -   4ms:   34.046% ####################
   4ms -   6ms:    7.567% ####
   6ms -   8ms:    1.261% 
   8ms -  10ms:    0.334% 
  10ms -  12ms:    0.056% 
  12ms -  14ms:    0.013% 
  14ms -  16ms:    0.021% 
  16ms -  18ms:    0.024% 
  18ms -1h0m0s:    0.084% 
Requests per second:     166659
Test duration:       4.259556203s
Avg response time:   2.151518ms
Min response time:     28.528us
Max response time:   141.105123ms
Cache miss count:        290108
Cache hit count:         709892
Cache miss ratio:        29.011%
Errors count:                 0


3. Fill the cache with 100K 25-byte items:

$ ./go-memcached-bench -valueSize=25 -itemsCount=100000 -workerMode=GetHit 
-serverAddrs=localhost:11214
Config:
clientType=[new]
connectionsCount=[4]
getRatio=[0.900000]
goMaxProcs=[4]
itemsCount=[100000]
keySize=[30]
maxPendingRequestsCount=[1024]
maxResponseTime=[20ms]
osReadBufferSize=[229376]
osWriteBufferSize=[229376]
requestsCount=[1000000]
readBufferSize=[4096]
responseTimeHistogramSize=[10]
serverAddrs=[localhost:11214]
valueSize=[25]
workerMode=[GetHit]
workersCount=[512]
writeBufferSize=[4096]

Preparing...done
starting...done
Response time histogram
     0 -   2ms:   55.847% #################################
   2ms -   4ms:   32.723% ###################
   4ms -   6ms:    9.517% #####
   6ms -   8ms:    1.591% 
   8ms -  10ms:    0.275% 
  10ms -  12ms:    0.033% 
  12ms -  14ms:    0.006% 
  14ms -  16ms:    0.004% 
  16ms -  18ms:    0.004% 
  18ms -1h0m0s:    0.000% 
Requests per second:     229025
Test duration:       4.287611513s
Avg response time:   2.172225ms
Min response time:     24.516us
Max response time:   16.438077ms
Cache miss count:         18028
Cache hit count:         981972
Cache miss ratio:         1.803%
Errors count:                 0

The 1.8% cache miss ratio clearly shows that go-memcached is free from slab 
calcification. That's no surprise: it doesn't use slabs at all :)
It's worth investigating and comparing the other numbers reported by 
go-memcached-bench above for memcached and go-memcached. And, of course, 
testing other key-value caches and storage apps with memcache protocol 
support using go-memcached-bench.

On Monday, March 18, 2013 12:14:47 PM UTC+2, Chang Chen wrote:

> Hi Dormando
>
> Why I asked this question is that I noticed that redis solved 
> memory fragmentation by using *jemalloc*.(see 
> http://oldblog.antirez.com/post/everything-about-redis-24.html, and 
> search jemalloc).
>
> It make me consider using jemalloc in memcached for this issue.  Is it 
> possible? 
>
> I noticed debate between you and twitter guy, however, if this is 
> possible, slab calcification can be avoided, no significant 
> external fragmentation, and still has deterministic response time.
>
> Thanks
> Chang
>
>
> On Monday, March 18, 2013 4:50:19 PM UTC+8, Dormando wrote:
>>
>> Yes. It's had slab calcification for ten years. 
>>
>> It's also had a fix for this for over a year: 
>> http://code.google.com/p/memcached/wiki/ReleaseNotes1411 
>> ... this was further improved in later versions as well. 
>>
>> That twitter blog post is confusing and I am disappointed in all of you 
>> for not pointing out that this code exists in the main tree sooner. 
>>
>> On Sun, 17 Mar 2013, Chang Chen wrote: 
>>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.

