Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread elSchrom
Thx for your replies so far. Failover is deactivated in our
configuration, so that cannot be the reason. I think I have to write a
little bit more about the circumstances:

Our 50+ node consistent-hashing cluster is very reliable in normal
operation; incr/decr, get, set, multiget, etc. are not a problem. If
we had a problem with keys landing on the wrong servers in the
continuum, we should see more problems, which we currently do not.
The cluster is always under relatively high load (the number of
connections, for example, is very high due to the 160+ webservers in
front of it). We are now seeing, in a very few cases, that this
locking mechanism does not work: two different clients try to take the
lock with the same key. (If you want to prevent multiple inserts into a
database on the same primary key, you have to explicitly use one key
that is valid for all clients, not a key with client-unique hashes in
it.) It works millions of times as expected (we are generating a large
number of user-triggered database inserts, ~60/sec., with this
construct), but a handful of locks do not work and show the behaviour
described. So my question again: is it conceivable (even if very
implausible) that a multithreaded memcached does not provide a 100%
atomic add()?
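For clarity, the locking construct described above can be sketched as
follows. This is a minimal model, not our production code: a dict behind a
single lock stands in for memcached's documented add() semantics ("store
only if absent"), and the key name and insert callback are illustrative;
with a real cluster you would call your client library's add() instead.

```python
import threading

# Stand-in for memcached's add(): store the key only if it is absent,
# atomically, so exactly one caller can win the lock.
class FakeMemcache:
    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}

    def add(self, key, value, expire=30):
        # "Store only if not present" -- the semantics add() is
        # documented to have on a real memcached server.
        with self._lock:
            if key in self._store:
                return False
            self._store[key] = value
            return True

def insert_once(mc, primary_key, results):
    # One well-known lock key per primary key, shared by ALL clients
    # (not a key containing client-unique hashes).
    lock_key = "lock:user_insert:%s" % primary_key
    if mc.add(lock_key, "1", expire=30):
        results.append(primary_key)  # we won: safe to INSERT into the DB

mc = FakeMemcache()
results = []
threads = [threading.Thread(target=insert_once, args=(mc, 42, results))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 1 -- exactly one thread acquired the lock
```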

Kind regards,

Jerome


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread dormando
> [...]
> So my question is again: is it conceivable (even if very implausible)
> that a multithreaded memcached does not provide a 100% atomic add()?

restart memcached with -t 1 and see if it stops happening. I already said
it's not possible.


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread elSchrom


On 14 Okt., 10:00, dormando dorma...@rydia.net wrote:
> > [...]
> > a multithreaded memd does not provide 100% sure atomic add()?
>
> restart memcached with -t 1 and see if it stops happening. I already said
> it's not possible.

Yeah, right. :-) Restarting all memd instances is not an option. Can
you explain why it is not possible?


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread dormando

> Yeah, right. :-) Restarting all memd instances is not an option. Can
> you explain why it is not possible?

Because we've programmed the commands with the full intent that they be
atomic. If one isn't, that's a bug. There's an issue with incr/decr that's
been fixed upstream, but we've never had a reported issue with add.

I'm not sure what you want to hear. They're supposed to be atomic, yes;
that much is in the wiki too.


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread elSchrom
Hi Diez,

On 14 Okt., 11:39, Dieter Schmidt flatl...@stresstiming.de wrote:
> To me it sounds like a configuration problem on the webservers or an
> availability/accessibility issue. If, for example, all machines are
> accessible, the locking key resides on machine x. If one of the
> webservers differs in its config, it can happen that the key is added a
> second time, as new, somewhere else in the continuum. As a result you
> will get a second insert into your db.
>
> What do you think? Possible?

Possible for sure, but that should produce more problems, like massive
numbers of redundantly cached items, because some clients would have a
different continuum. That is most likely not happening. The current
failure rate is smaller than 0.0001%, and the failures appear on
different frontend servers. It feels like a very unlikely event,
surfaced by the massive number of add() calls, with a very rare number
of failures.




Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread elSchrom


On 14 Okt., 14:01, Dieter Schmidt flatl...@stresstiming.de wrote:
> What happens if the add cmd fails because of an unlikely network error?

The situation is: two different clients do an add() with the same key
at the same time, and both get true. Assuming that this key has to be
on the same machine, it has to be a threading problem or a bug in
add(). This breaks the atomic behaviour we are expecting. But we
cannot prove that the key is on the same server at that moment,
because the situation is highly volatile. It is just speculation,
because if keys were not being stored on the right servers due to
consistent-hashing problems, we should expect more problems.
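A reproduction test for this could look roughly like the sketch below.
This is only a model: a dict behind one lock plays the role of a
correctly atomic memcached, and a barrier releases all clients at (nearly)
the same instant; against the real cluster you would replace the local
add() with the client library's add() call. If add() is atomic, no round
can ever have two winners.

```python
import threading

# Dict-based stand-in modelling an atomic add(): at most one True per key.
store = {}
store_lock = threading.Lock()

def add(key, value):
    with store_lock:
        if key in store:
            return False
        store[key] = value
        return True

def run_round(key, wins):
    barrier.wait()  # release all "clients" at (nearly) the same moment
    if add(key, "x"):
        with wins_lock:
            wins.append(key)

wins, wins_lock = [], threading.Lock()
ROUNDS, CLIENTS = 100, 8
for r in range(ROUNDS):
    barrier = threading.Barrier(CLIENTS)
    threads = [threading.Thread(target=run_round, args=("lock:%d" % r, wins))
               for _ in range(CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Exactly one winner per round means add() behaved atomically in this model.
print(len(wins) == ROUNDS)  # True
```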




1MB limit on object size

2010-10-14 Thread Sakuntala
Hi,

I am using
memcached server version 1.4.4 on Linux and
spymemcached client version 2.4.2.

I performed a test storing and retrieving objects of size 1.8MB and
3.6MB. I am able to retrieve the objects successfully without any
error. My load tests show some timeout exceptions when the memcached
server is under heavy load, which is expected.

The memcached wiki states that one of the reasons you should not be
using memcached is if the object size exceeds 1MB. I want to know if
this is a hard limit, or just that memcached is not efficient in that
scenario due to the storage model.

The reason for my question is that I have just one object (a list)
which might exceed 1MB in the future, but currently it does not exceed
the limit. I considered storing it in several chunks and using
bulkGet() to retrieve the multiple small objects, but the response
times with bulkGet() are not efficient. I should probably not be using
memcached for this object (and should look into other caching
mechanisms), but I don't want to change the implementation until the
next release. Please suggest.
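For reference, the chunking approach mentioned above can be sketched as
follows. This is a language-neutral model, not spymemcached code: a plain
dict stands in for the cache, the key scheme is illustrative, and with a
real client the per-chunk reads would be a single bulk/multi-get call
(method names vary by client library).

```python
# Split a value into pieces below memcached's default 1 MB item size
# limit, store each piece under a derived key, and record the chunk
# count under the master key so the reader knows what to fetch.
CHUNK = 1000 * 1000  # stay under the ~1 MB default item size limit

def store_chunked(cache, key, data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    for n, part in enumerate(chunks):
        cache["%s:%d" % (key, n)] = part
    cache[key] = len(chunks)  # master key records the chunk count

def fetch_chunked(cache, key):
    count = cache.get(key)
    if count is None:
        return None
    keys = ["%s:%d" % (key, n) for n in range(count)]
    parts = [cache[k] for k in keys]  # one bulkGet()/get_multi() in a real client
    return b"".join(parts)

cache = {}
blob = b"x" * (3 * 1000 * 1000 + 123)  # ~3 MB, over the single-item limit
store_chunked(cache, "biglist", blob)
assert fetch_chunked(cache, "biglist") == blob
print(len(cache))  # 5 -- master key plus 4 chunk keys
```

Note the value only round-trips correctly if all chunk keys survive;
with real eviction you must treat a missing chunk as a full cache miss.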

Thanks,




Silicon Valley and Memcache

2010-10-14 Thread Dave Haverstick
We are recruiting for two Data Warehousing/Cloud Computing software
companies in the South Bay of Silicon Valley.  One is a block from the
Cal-train.

They are looking for Sr Software Engineers with a C++ and Memcached
background and Sr QA Engineers.

Please call me, or suggest someone, and we can discuss the specifics.

Thank you


Dave Haverstick
Senior Technical Recruiter

Office: 650-763-8758  x200
Email: da...@triadgroup.com
LinkedIn: www.linkedin.com/in/davetriadgroup



Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread dormando
> On 14 Okt., 10:31, dormando dorma...@rydia.net wrote:
> > > Yeah, right. :-) Restarting all memd instances is not an option. Can
> > > you explain why it is not possible?
> >
> > Because we've programmed the commands with the full intent to be atomic.
> > If it's not, there's a bug... there's an issue with incr/decr that's been
> > fixed upstream but we've never had a reported issue with add.
> >
> > I'm not sure what you want to hear. They're supposed to be atomic, yes;
> > that much is in the wiki too.
>
> I sure thought that you designed memd to behave exactly the same with
> 1 or many threads, and it's good to hear that there is no pending bug
> concerning the atomicity of add() on multiple threads. The reason why
> someone posts such a thing on the mailing list is to hear the opinion
> of a dev who has all the insight. :-)
> So please understand my obstinate behaviour.
> We are planning to run some tests concerning this behaviour; maybe I
> can provide more detail in the future. But it will be hard to find
> proof of a bug in this scenario. For that we have to build a test
> scenario with multiple instances trying to do an add() on the same
> key at the exact same time on a consistent-hashing cluster.

Can you give more info about exactly what the app is doing? What version
you're on as well? I can squint at it again and see if there's some minute
case.

Need to know exactly what you're doing though. How long the key tends to
live, how many processes are hammering the same key, what you're setting
the timeout to, etc.

Your behavior's only obstinate because you keep asking if we're sure it's
atomic. Yes, it's supposed to be atomic; if you think you've found a bug,
let's talk about bug hunting :P