Re: Some set() calls taking way too long

2012-07-17 Thread David Morel

On 17 Jul 2012, at 15:33, Yiftach Shoolman wrote:

> Re #2 - when more objects are stored in the cache, the hit ratio should be
> higher --> the application might run faster, i.e. with fewer DB accesses.


No relation here; I was really talking about the timing of bare set()
calls, regardless of anything else.


> Anyway, it sounds to me more related to slab allocation, as only a restart
> solves it (not a flush), if I understood you correctly.


All the slabs are already allocated, with a moderate rate of evictions,
so it's not slab allocation either. It's just one key now and then,
regardless (it seems) of the key itself or the size of the object.
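
For anyone who wants to check the same counters, they can be read straight
from the server's text protocol; a rough Python sketch, assuming a server on
127.0.0.1:11211 (the helper name is just for illustration):

    import socket

    def memcached_stats(host="127.0.0.1", port=11211, command="stats"):
        # Send a raw stats command and parse the "STAT <name> <value>" lines.
        sock = socket.create_connection((host, port), timeout=2)
        try:
            sock.sendall((command + "\r\n").encode())
            buf = b""
            while not buf.endswith(b"END\r\n"):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
        finally:
            sock.close()
        stats = {}
        for line in buf.decode().splitlines():
            if line.startswith("STAT "):
                _, name, value = line.split(" ", 2)
                stats[name] = value
        return stats

    # "stats" gives evictions/curr_items; "stats slabs" shows per-slab allocation.
    s = memcached_stats()
    print("evictions:", s.get("evictions"), "curr_items:", s.get("curr_items"))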


> Does it happen with any object size, or in a specific object size range?


Anything, really. Puzzling, eh?

Thanks!



David Morel
--
Booking.com <http://www.booking.com/>
Lyon office: 93 rue de la Villette, 5e étage, F-69003 Lyon
phone: +33 4 20 10 26 63
gsm: +33 6 80 38 56 83


Re: Some set() calls taking way too long

2012-07-17 Thread David Morel


On Tuesday, July 17, 2012 12:26:16 PM UTC+2, Yiftach wrote:
>
> A few things that may help in understanding your problem:
>
> 1. What is the status of your slab allocation? Is there enough room for
> all slabs?
>

This happens when the memory gets close to full; however, there is not a
large number of evictions.
I would expect evictions to happen whenever needed, but not the process of
making room for one object to take half a second.
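
One way to catch these outliers from the client side is simply to time each
set(); a rough sketch, assuming python-memcached and a local server (the key
prefix and the 50 ms threshold are arbitrary):

    import time
    import memcache  # python-memcached

    mc = memcache.Client(["127.0.0.1:11211"])
    SLOW_MS = 50  # arbitrary threshold for flagging a slow call

    for i in range(100000):
        key = "probe:%d" % i
        start = time.time()
        mc.set(key, "x" * 100)  # small, fixed-size value
        elapsed_ms = (time.time() - start) * 1000
        if elapsed_ms > SLOW_MS:
            print("slow set: %s took %.1f ms" % (key, elapsed_ms))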
 

> 2. Do you see an increase in the request rate when your memcached memory is
> becoming full with objects?
>

I don't think so. Why would that be the case? It's application-dependent,
not server-dependent, right?
 

> 3. How many threads are configured?
>

The default 4.
 

>
>
> On Tue, Jul 17, 2012 at 1:11 PM, David Morel wrote:
>
>> Hi memcached users/devs,
>>
>> I'm seeing occasional slowdowns (tens of milliseconds) in setting some
>> keys on some big servers (80GB RAM allocated to memcached) which contain
>> a large number of keys (many millions). The current version I use is
>> 1.4.6 on RH6.
>>
>> The thing is, once I bounce the service (restart, not flush_all),
>> everything becomes fine again. So could a large number of keys be the
>> source of the issue (some memory-allocation slowdown or something)?
>>
>> I don't see that many evictions on the box, and anyway, evicting an
>> object to make room for another shouldn't take long, should it? Is there
>> a remote possibility that the large number of keys is at fault, and splitting
>> the daemon into two or more instances per box would fix it? Or is that
>> a known issue fixed in a later release?
>>
>> Thanks for any insight.
>>
>> David Morel
>>
>
>
>
> -- 
> Yiftach Shoolman
> +972-54-7634621
>  


Some set() calls taking way too long

2012-07-17 Thread David Morel
Hi memcached users/devs,

I'm seeing occasional slowdowns (tens of milliseconds) in setting some
keys on some big servers (80GB RAM allocated to memcached) which contain
a large number of keys (many millions). The current version I use is
1.4.6 on RH6.

The thing is, once I bounce the service (restart, not flush_all),
everything becomes fine again. So could a large number of keys be the
source of the issue (some memory-allocation slowdown or something)?

I don't see that many evictions on the box, and anyway, evicting an
object to make room for another shouldn't take long, should it? Is there
a remote possibility that the large number of keys is at fault, and splitting
the daemon into two or more instances per box would fix it? Or is that
a known issue fixed in a later release?
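
(For the record, the kind of split I have in mind would look something like
the following; the memory sizes and ports are arbitrary:

    memcached -d -u memcached -m 40960 -p 11211
    memcached -d -u memcached -m 40960 -p 11212
)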

Thanks for any insight.

David Morel


Re: Get multi error - too many keys returned

2010-07-21 Thread David Morel
On 21 Jul, 10:59, Guille -bisho- wrote:
> On Wed, Jul 21, 2010 at 10:37, dormando  wrote:
> >> I'm having an issue with my memcached farm. I'm using 3 memcached
> >> servers (v1.2.6) and python-memcached client (v1.43).
>
> >> When using get_multi function, memcached returns sometimes keys that I
> >> wasn't asking for. Memcached can return less data than expected (if
> >> key-value does not exist), but documentation says nothing about
> >> returning more data than it should. What can cause this?
>
> We have experienced similar behaviours with php memcache under heavy
> load. Sometimes the clients seem to mix requests from different
> replies.

This happens when your client forks and forgets to reopen a
connection, but re-uses the one that was open before the fork.
As a result, several clients use the same socket, which ends up
interleaving the traffic.
This is completely evil, and not really about memcached (it will happen
just the same with any network protocol); it is solved by
disconnecting and reconnecting.
Also, it is only visible under heavy load, because that is when you
have a chance of seeing actually concurrent requests despite the very
short round trip of a typical memcached query.
Solutions: #1, make very sure you open the connection after the fork;
or #2, add a simple check in your memcached layer that verifies the PID
hasn't changed. If it has, do a disconnect_all() (or whatever it's
called in your client library) and then run your queries. There is some
overhead with solution #2, but #1 is not always applicable if you have
spaghetti code and/or lax coding policies. A sketch of #2 follows.
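
A minimal sketch of option #2, assuming python-memcached (the wrapper class
and its names are just for illustration):

    import os
    import memcache  # python-memcached

    class ForkSafeClient(object):
        """Wraps memcache.Client so that forked children reconnect."""

        def __init__(self, servers):
            self._client = memcache.Client(servers)
            self._pid = os.getpid()

        def _check_fork(self):
            # A child inherits the parent's sockets; if the PID changed,
            # drop them and let the client reconnect lazily.
            if os.getpid() != self._pid:
                self._client.disconnect_all()
                self._pid = os.getpid()

        def get(self, key):
            self._check_fork()
            return self._client.get(key)

        def set(self, key, value, time=0):
            self._check_fork()
            return self._client.set(key, value, time)

    # Usage:
    #   mc = ForkSafeClient(["127.0.0.1:11211"])
    #   mc.set("k", "v"); mc.get("k")  # safe on either side of a fork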

David


Re: how to run the two memcache in same windows xp machine

2010-03-26 Thread David Morel
On 25 Mar, 06:44, Senthil Raja wrote:

> I am not getting a new service when I run the following command.
>
> sc create memcached1 binPath="c:\memecached\memcached.exe -d runservice -p
> 11311" start=auto DisplayName="memcached server 2"
>
> please help me.
>

c:\memecached\memcached.exe ... your directory is "memEcached"; typo?

David
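
(For reference, assuming the binary really lives under c:\memcached, the
corrected invocation would be:

    sc create memcached1 binPath= "c:\memcached\memcached.exe -d runservice -p 11311" start= auto DisplayName= "memcached server 2"

Note that sc.exe also expects a space after each option's equals sign,
e.g. binPath= "...", which is another common reason the service is
silently not created.)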
