Re: Some set() calls taking way too long

2012-07-17 Thread Yiftach Shoolman
A few things that may help in understanding your problem:

1. What is the status of your slab allocation? Is there enough room for all
slabs?
2. Do you see an increase in the request rate as your Memcached memory
becomes full of objects?
3. How many threads are configured?
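All three of these can be checked against a running instance via the text-protocol `stats` command; a minimal sketch over a raw socket (host and port are assumed defaults, adjust for your deployment):

```python
import socket

HOST, PORT = "127.0.0.1", 11211  # assumed defaults

def parse_stats(payload):
    """Turn 'STAT key value' lines from a stats reply into a dict."""
    stats = {}
    for line in payload.splitlines():
        if line.startswith("STAT "):
            _, key, value = line.split(" ", 2)
            stats[key] = value
    return stats

def fetch_stats(command="stats"):
    """Send a stats command over the memcached text protocol."""
    with socket.create_connection((HOST, PORT), timeout=2) as sock:
        sock.sendall((command + "\r\n").encode())
        buf = b""
        while not buf.endswith(b"END\r\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    return parse_stats(buf.decode())

try:
    s = fetch_stats()
    print("threads:   ", s.get("threads"))    # question 3
    print("evictions: ", s.get("evictions"))  # question 1
    print("fill:      ", s.get("bytes"), "/", s.get("limit_maxbytes"))
except OSError as e:
    print("no memcached reachable:", e)
```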


On Tue, Jul 17, 2012 at 1:11 PM, David Morel david.mo...@amakuru.net wrote:

 hi memcached users/devvers,

 I'm seeing occasional slowdowns (tens of milliseconds) in setting some
 keys on some big servers (80GB RAM allocated to memcached) which contain
 a large number of keys (many millions). The current version I use is
 1.4.6 on RH6.

 The thing is, once I bounce the service (restart, not flush_all),
 everything becomes fine again. So could a large number of keys be the
 source of the issue (some memory allocation slowdown or something)?

 I don't see that many evictions on the box, and anyway, evicting an
 object to make room for another shouldn't take long, should it? Is there
 a remote possibility the large number of keys is at fault and splitting
 the daemons, like 2 or more instances per box, would fix it? Or is that
 a known issue fixed in a later release?
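One way to confirm those slow sets at the protocol level, independent of any client library, is to time bare set() round-trips over a raw socket; a rough sketch, with the key names and value sizes purely illustrative:

```python
import socket
import time

def build_set(key, value, flags=0, exptime=0):
    """Assemble a memcached text-protocol set command."""
    header = "set %s %d %d %d\r\n" % (key, flags, exptime, len(value))
    return header.encode() + value + b"\r\n"

def timed_set(sock, key, value, slow_ms=10.0):
    """Issue one set and return its round-trip latency in milliseconds."""
    t0 = time.perf_counter()
    sock.sendall(build_set(key, value))
    reply = sock.recv(1024)  # expect b"STORED\r\n"
    ms = (time.perf_counter() - t0) * 1000.0
    if ms > slow_ms:
        print("slow set: %s took %.1f ms (reply %r)" % (key, ms, reply))
    return ms

# Usage, against an assumed local instance:
#   with socket.create_connection(("127.0.0.1", 11211)) as sock:
#       for i in range(100000):
#           timed_set(sock, "probe:%d" % i, b"x" * 512)
```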

 Thanks for any insight.

 David Morel




-- 
Yiftach Shoolman
+972-54-7634621


Re: Some set() calls taking way too long

2012-07-17 Thread David Morel


On Tuesday, July 17, 2012 12:26:16 PM UTC+2, Yiftach wrote:

 A few things that may help in understanding your problem:

 1. What is the status of your slab allocation? Is there enough room for
 all slabs?


This happens when the memory gets close to full. However, there is not a
large number of evictions.
I would expect evictions to happen whenever needed, but not for the process
of making room for one object to take half a second.
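For what it's worth, "not a large number" can be pinned down by sampling the `evictions` counter from two consecutive `stats` outputs; a trivial sketch (the sample values below are made up):

```python
def eviction_rate(sample_a, sample_b):
    """Evictions per second between two (evictions, unix_time) samples
    taken from consecutive memcached `stats` outputs."""
    (e0, t0), (e1, t1) = sample_a, sample_b
    return (e1 - e0) / max(t1 - t0, 1e-9)

# Two snapshots taken 60 seconds apart (illustrative counters):
print(eviction_rate((120400, 0.0), (120700, 60.0)))  # -> 5.0 evictions/s
```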
 

 2. Do you see an increase in the request rate as your Memcached memory
 becomes full of objects?


I don't think so. Why would that be the case? It's application-dependent,
not server-dependent, right?
 

 3. How many threads are configured?


the default, 4
 





Re: Some set() calls taking way too long

2012-07-17 Thread Yiftach Shoolman
Re #2: when more objects are stored in the cache, the hit ratio should be
higher, so the application might run faster, i.e. with fewer DB accesses.

Anyway, it sounds to me more related to slab allocation, as only a restart
solves it (not a flush), if I understood you correctly.

Does it happen with any object size, or with a specific object size range?
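The size-range question can be answered from `stats items`, which breaks eviction counters down per slab class; a small parser sketch for that output format:

```python
def items_by_class(stats_items_text):
    """Group `stats items` output lines by slab class id, e.g.
    'STAT items:5:evicted 12' -> {5: {'evicted': 12}}."""
    classes = {}
    for line in stats_items_text.splitlines():
        if not line.startswith("STAT items:"):
            continue
        head, value = line[len("STAT items:"):].split(" ", 1)
        cid, field = head.split(":", 1)
        classes.setdefault(int(cid), {})[field] = int(value)
    return classes

# Classes with a high 'evicted' count are the ones under the most
# memory pressure, and point at a specific object size range.
```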



-- 
Yiftach Shoolman
+972-54-7634621


Re: Some set() calls taking way too long

2012-07-17 Thread David Morel

On 17 Jul 2012, at 15:33, Yiftach Shoolman wrote:

 Re #2: when more objects are stored in the cache, the hit ratio should be
 higher, so the application might run faster, i.e. with fewer DB accesses.

no relation here, I was really talking about the timing of bare set()
calls, regardless of anything else.

 Anyway, it sounds to me more related to slab allocation, as only a
 restart solves it (not a flush), if I understood you correctly.

all the slabs are already allocated, with a moderate rate of evictions,
so it's not slab allocation either. It's just one key now and then,
regardless (it seems) of the key itself or the size of the object.

 Does it happen with any object size, or with a specific object size range?

anything, really. Puzzling, eh?

thanks!




David Morel
--
Booking.com http://www.booking.com/
Lyon office: 93 rue de la Villette, 5e étage, F-69003 Lyon
phone:+33 4 20 10 26 63
gsm:+33 6 80 38 56 83


Re: Some set() calls taking way too long

2012-07-17 Thread Yiftach Shoolman
One more guess: could it be that you are limited by the network bandwidth
of your server?

I have come across many Memcached deployments where this was the case.
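On Linux this is easy to rule in or out from the byte counters in /proc/net/dev; a sketch (the interface name in the usage note is an assumption):

```python
def read_bytes(iface):
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, _, data = line.partition(":")
            if name.strip() == iface:
                fields = data.split()
                return int(fields[0]), int(fields[8])
    raise ValueError("no such interface: %s" % iface)

def utilisation_mbps(rx0, tx0, rx1, tx1, seconds):
    """Combined throughput in Mbit/s between two counter samples."""
    return ((rx1 - rx0) + (tx1 - tx0)) * 8 / seconds / 1e6

# Usage: sample twice and compare against the NIC's line rate, e.g.
#   rx0, tx0 = read_bytes("eth0"); time.sleep(5); rx1, tx1 = read_bytes("eth0")
#   print(utilisation_mbps(rx0, tx0, rx1, tx1, 5.0))
```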

On Tue, Jul 17, 2012 at 7:47 PM, David Morel david.mo...@amakuru.netwrote:

 On 17 Jul 2012, at 15:33, Yiftach Shoolman wrote:

  Re #2 - when more objects are stored in the cache, the hit ratio should be
 higher-- the application might run faster, i.e. with less DB accesses.


 no relation here, I was really mentioning bare set() calls timing,
 regardless of anything else.


  Anyway, it sounds to me more related to slabs allocation, as only reset
 solves it (not flush) if I understood u well.


 all the slabs are already allocated, with a moderate rate of evictions. so
 it's not slab
 allocation either, it's just one key now and then, regardless (it seems)
 of the key
 itself or the size of the object.


  Does it happen on any object size or on specific object size range ?


 anything, really. puzzling, hey?

 thanks!


  On Tue, Jul 17, 2012 at 4:04 PM, David Morel david.mo...@amakuru.net**
 wrote:



 On Tuesday, July 17, 2012 12:26:16 PM UTC+2, Yiftach wrote:


 Few things that may help understanding your problem:

 1. What is the status of your slabs allocation, is there enough room to
 all slabes ?


 This happens when the memory gets close to full. however there is not a
 large number of evictions.
 I would expect evictions to be made whenever needed, but not the process
 of making room for 1 object to take half a second.


  2. Do you see increase in the requests rate when your Memcached memory
 is
 becoming full with objects ?


 I don't think so, why would that be the case, it's application dependent,
 not server, right?


  3. How many threads are configured ?


 the default 4




 On Tue, Jul 17, 2012 at 1:11 PM, David Morel david.mo...@amakuru.net*
 *wrote:

  hi memcached users/devvers,

 I'm seeing occasional slowdowns (tens of milliseconds) in setting some
 keys on some big servers (80GB RAM allocated to memcached) which
 contain
 a large number of keys (many millions). The current version I use is
 1.4.6 on RH6.

 The thing is once I bounce the service (restart, not flush_all),
 everything becomes fine again. So could a large number of keys be the
 source of the issue (some memory allocation slowdown or something)?

 I don't see that many evictions on the box, and anyway, evicting an
 object to make room for another shouldn't take long, should it? Is
 there
 a remote possibility the large number of keys is at fault and splitting
 the daemons, like 2 or more instances per box, would fix it? Or is that
 a known issue fixed in a later release?

 Thanks for any insight.

 David Morel




 --
 Yiftach Shoolman
 +972-54-7634621




 --
 Yiftach Shoolman
 +972-54-7634621



 David Morel
 --
 Booking.com http://www.booking.com/
 Lyon office: 93 rue de la Villette, 5e étage, F-69003 Lyon
 phone:+33 4 20 10 26 63
 gsm:+33 6 80 38 56 83




-- 
Yiftach Shoolman
+972-54-7634621


Re: Some set() calls taking way too long

2012-07-17 Thread dormando
Have you seen/gone through this: http://memcached.org/timeouts ?

Have you checked the common stuff on that page (not swapping even a little
bit, not under memory pressure, etc.), and used the independent
connection-tester tool to validate that it's specifically sets, rather
than gets or new connections, that are problematic? Going through the
motions there helps rule out a lot of stuff.

It'd be great if you were on a newer version as well, but that may not
solve any problems here. With four threads, if a set blocks for 10
milliseconds due to a memcached-related issue, a quarter of all your gets
would also block for at least 10 milliseconds. Since you're not seeing
that, I'm willing to bet it's something else, unless that's not true, in
which case the testing above should confirm it.

We'd also need to know what client you're using, in case you're on one
that batches sets, or the bug is in the client.
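The connection-tester idea is essentially "hammer one operation and look at the tail of the latency distribution"; a crude nearest-rank percentile reporter along those lines, to be fed latencies from any timing loop (the sample values are illustrative, and the stalled-fraction arithmetic is the four-threads reasoning above):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    xs = sorted(samples)
    k = int(round(p / 100.0 * (len(xs) - 1)))
    return xs[max(0, min(len(xs) - 1, k))]

def report(samples):
    """Print median and tail latencies for a batch of timings (ms)."""
    for p in (50, 99, 100):
        print("p%-3d %.1f ms" % (p, percentile(samples, p)))

# With 4 worker threads, one thread blocked for 10 ms should drag
# roughly 1/4 of concurrent requests with it:
print("expected stalled fraction:", 1 / 4)

report([0.3, 0.4, 0.3, 0.5, 12.0])  # a 12 ms outlier surfaces at the tail
```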
