On 10/17/10 6:07 AM, Tobias wrote:
> > Is it ever possible that your compute takes longer than your timeout?
> no, the return value of memcache.delete("lock" + x) is true.
But wouldn't that also be true if another process found the expired lock and set
a new one?
--
Les Mikesell
lesmikes..
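The hazard Les raises is that a plain delete() still succeeds (returns true) even if our lock expired and another client re-acquired it, so we would silently remove someone else's lock. A common mitigation, not from the thread, is to store a per-client token in the lock and only delete when the token still matches. A minimal sketch, using an in-process stand-in for the client (FakeMemcache and release_lock are illustrative names, not real API):

```python
import uuid

class FakeMemcache:
    """Tiny in-process stand-in for a memcached client (illustration only)."""
    def __init__(self):
        self._data = {}
    def add(self, key, value, ttl=0):
        # add() stores only if the key is absent.
        if key in self._data:
            return False
        self._data[key] = value
        return True
    def get(self, key):
        return self._data.get(key)
    def delete(self, key):
        return self._data.pop(key, None) is not None

def release_lock(mc, key, token):
    # Only delete the lock if we still own it; a plain delete() would
    # also remove a lock re-acquired by someone else after ours expired.
    # (Not fully race-free without real CAS, but it shows the idea.)
    if mc.get(key) == token:
        return mc.delete(key)
    return False

mc = FakeMemcache()
my_token = uuid.uuid4().hex
mc.add("lock42", my_token, 60)

# Simulate our lock expiring and another client taking it over:
mc.delete("lock42")
other_token = uuid.uuid4().hex
mc.add("lock42", other_token, 60)

print(release_lock(mc, "lock42", my_token))   # False: we no longer own it
print(mc.get("lock42") == other_token)        # True: the other lock survives
```

With real memcached the same check can be done race-free using gets/cas instead of get/delete.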
> Is it ever possible that your compute takes longer than your timeout?
no, the return value of memcache.delete("lock" + x) is true.
Is it ever possible that your compute takes longer than your timeout?
On Fri, Oct 15, 2010 at 5:45 AM, Tobias wrote:
> Can you give more info about exactly what the app is doing?
Something like this:
value = memcache.get("record" + x)
if (false == value && memcache.add("lock" + x, "1", 60)) {
    compute (expensive) record
    insert record with primary key x into DB
    memcache.set("record" + x, record);
    memcache.delete("lock" + x);
}
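A runnable sketch of the pattern above, using an in-process stand-in for the memcached client and the database (FakeMemcache, db, and the record values are illustrative, not from the thread). The point is that add() succeeds only for the first caller, so only one client computes the record:

```python
class FakeMemcache:
    """In-process stand-in for a memcached client (illustration only)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key, False)   # memcached-style miss -> False
    def add(self, key, value, ttl=0):
        # Atomic inside one memcached process: store only if absent.
        if key in self._data:
            return False
        self._data[key] = value
        return True
    def set(self, key, value, ttl=0):
        self._data[key] = value
    def delete(self, key):
        return self._data.pop(key, None) is not None

db = {}   # stand-in for the MySQL table

def get_or_compute(mc, x):
    value = mc.get("record" + x)
    if value is False and mc.add("lock" + x, "1", 60):
        record = "expensive-record-" + x    # the expensive computation
        db[x] = record                      # INSERT with primary key x
        mc.set("record" + x, record)
        mc.delete("lock" + x)
        return record
    return value   # cached record, or False while another client computes

mc = FakeMemcache()
print(get_or_compute(mc, "42"))   # computes, inserts, and caches
print(get_or_compute(mc, "42"))   # served from the cache
```

The second caller that loses the add() race gets False back and must retry or wait; the pattern only works if add() is atomic on the node holding the lock key, which is what the rest of the thread is about.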
On 14 Oct., 14:01, Dieter Schmidt wrote:
> What happens if the add cmd fails because of an unlikely network error?
The situation is: two different clients are doing an add() with the
same key at the same time. Both are getting true (assuming that this
key has to be on the same machine, it has […]
What happens if the add cmd fails because of an unlikely network error?
elSchrom wrote:
Hi Diez,
On 14 Oct., 11:39, Dieter Schmidt wrote:
> For me it sounds like a configuration problem on the webservers or an
> availability/accessibility issue.
> If for example all machines are accessible, the locking key resides on
> machine x. If one of the webservers differs in cfg […]
For me it sounds like a configuration problem on the webservers or an
availability/accessibility issue.
If for example all machines are accessible, the locking key resides on
machine x. If one of the webservers differs in cfg, it can happen that
the key is added a second time as new […]
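Dieter's failure mode can be sketched concretely. The toy below uses simplified modulo hashing rather than a real ketama continuum, and the server addresses are invented; the point is only that when two webservers disagree about the server list, many keys, lock keys included, map to different nodes, so both clients' add() calls can succeed, each on a different server:

```python
import hashlib

def pick_server(key, servers):
    # Simplified modulo hashing (NOT real consistent hashing), just to
    # illustrate the failure mode: the node chosen for a key depends
    # entirely on the client's view of the server list.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

cluster_a = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
cluster_b = ["10.0.0.1:11211", "10.0.0.2:11211"]  # one node missing in this cfg

# Count how many lock keys land on different nodes under the two configs:
mismatched = [k for k in ("lock%d" % i for i in range(100))
              if pick_server(k, cluster_a) != pick_server(k, cluster_b)]
print(len(mismatched), "of 100 lock keys map to different servers")
```

Real consistent hashing shrinks the fraction of keys that move when one node drops out, but it cannot help if the two clients are configured with different server lists to begin with.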
On 14 Oct., 10:31, dormando wrote:
> Yeah, right. :-) Restarting all memd instances is not an option. Can
> you explain, why it is not possible?
Because we've programmed the commands with the full intent to be atomic.
If it's not, there's a bug... there's an issue with incr/decr that's been
fixed upstream but we've never had a re […]
Thx for your replies so far. Failover is deactivated in our
configuration. This cannot be the reason. I think I have to write a
little bit more about the circumstances:
our 50+ consistent hashing cluster is very reliable on normal
operations, incr/decr, get, set, multiget, etc. is not a problem. If
we have a problem with keys on wrong servers in the continuum, we
should have more problems, which we currently have not.
The cluster is always under relative […]
... or you could use a concatenation of your server ID/timestamp/query/unique
client variable(s)/session etc. (all hashed) as part of your (hashed)
key... there are countless ways to make your key unique, even in your
situation!
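The key-building suggestion above can be sketched like this; the helper name make_key and the example parts are illustrative, not from the thread. Hashing the concatenation keeps the key short and free of characters memcached forbids (spaces, control characters), whatever goes into it:

```python
import hashlib

def make_key(prefix, *parts):
    # Join all distinguishing parts, then hash, so the final key is a
    # fixed-length, memcached-safe string regardless of what went in.
    raw = "|".join(str(p) for p in parts)
    return prefix + hashlib.sha1(raw.encode()).hexdigest()

# Hypothetical inputs: server ID, timestamp, query, session identifier.
key = make_key("lock", "web-03", 1287000000, "INSERT ... x=42", "session-abc")
print(key)
```

Note the trade-off: the more client-specific the parts, the more unique the key, but a lock key must be derived only from the shared resource (e.g. the primary key x), or two clients will no longer contend on the same lock at all.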
On 13 October 2010 19:11, Adam Lee wrote:
Yeah, we also have used this as a sort of crude locking mechanism on a site
under fairly heavy load and have never seen any sort of inconsistency-- as
dormando said, I'd make sure your configuration is correct. Debug and make
sure that they're both indeed setting it on the same server. Or, if tha […]
Hi everyone,
we have the following situation: due to massive simultaneous inserts
in mysql on possibly identical primary keys, we use the atomic
memcache add() as a semaphore. In a few cases we observed the
behaviour that two simultaneous add() using the same key from
different clients both returned true.
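On a single memcached node this observation should be impossible, which is why the thread converges on the keys landing on different servers. A minimal model of one node's add() semantics (SingleServer is an illustrative stand-in, not real memcached code):

```python
class SingleServer:
    """Minimal model of one memcached node's add() semantics."""
    def __init__(self):
        self.store = {}
    def add(self, key, value):
        # add() stores only if the key is absent; this check-and-set
        # happens atomically inside one memcached process.
        if key in self.store:
            return False
        self.store[key] = value
        return True

node = SingleServer()
print(node.add("lock99", "client-A"))  # True: first add wins
print(node.add("lock99", "client-B"))  # False: second add must fail
```

Two clients can therefore only both see true if their add() calls reached two different nodes, e.g. because of mismatched server lists or failover rewriting the continuum.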