I agree. You should seriously consider the method that you are using.
The MemcacheService is a component designed to speed up (mostly read)
access to the Datastore. Users of this service should not rely on
the presence of data in memcache, but should use the Datastore and
its transaction…
Hi,
I think the following should work, with these caveats:
- there's a small danger that, if the server blows up before it
releases the lock, no thread would be able to update your cache; but
since it's only a cache of data, this is easily mitigated by
introducing a task which…
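The quoted caveat is cut off, so the task-based mitigation isn't spelled out here. Another common hedge against a holder that dies before releasing is to give the lock entry an expiration (on App Engine, the Expiration argument to MemcacheService.put), so a crashed instance's lock clears itself. A runnable sketch using an in-process map as a stand-in for memcache (class and method names are mine, not from the thread):

```java
import java.util.concurrent.ConcurrentHashMap;

// Stand-in sketch: the lock entry carries its own expiry time, so if the
// holder crashes before releasing, the lock becomes reclaimable after
// ttlMillis. On App Engine the expiry would instead be enforced by
// memcache itself via Expiration.byDeltaSeconds(...).
class ExpiringLock {
    private final ConcurrentHashMap<String, Long> locks = new ConcurrentHashMap<>();
    private final long ttlMillis;

    ExpiringLock(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Try to take the lock; an expired entry counts as free. */
    boolean tryAcquire(String key) {
        long now = System.currentTimeMillis();
        Long expiry = locks.putIfAbsent(key, now + ttlMillis);
        if (expiry == null) {
            return true;                                   // nobody held it
        }
        // Holder presumably died: atomically swap out the stale entry.
        return expiry < now && locks.replace(key, expiry, now + ttlMillis);
    }

    void release(String key) {
        locks.remove(key);
    }
}
```

The atomic `replace(key, oldValue, newValue)` matters: two waiters that both see the same stale expiry can't both reclaim the lock, because only one compare-and-swap succeeds.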
On Dec 15, 8:23 am, Simon qila...@gmail.com wrote:
since I'm
assuming it takes some amount of time for changes in memcache to
propagate throughout the instances. I may be wrong in this assumption,
however.
I don't think cache data is propagated to the various app instances.
It resides in its…
I considered the increment/decrement approach, but it creates a race
condition with the counter. Having many different threads/tasks
trying to grab the lock at the same time could increment the number
above one, even when no one has the lock, because there is still a
period of time between the read and the increment.
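The window described above exists because "read the counter, then increment it" is two separate operations. Memcache's increment is itself atomic on the server, so if acquisition is expressed as a single increment whose return value is inspected, two contenders can never both observe 1. A runnable sketch with an AtomicLong per key standing in for memcache's atomic increment (class and method names are mine, not from the thread):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Lock via atomic increment: acquisition is one atomic operation, so
// there is no window between "read the counter" and "bump the counter".
class IncrementLock {
    // Stand-in for memcache's atomic increment (one AtomicLong per key).
    private final ConcurrentHashMap<String, AtomicLong> counters = new ConcurrentHashMap<>();

    private long increment(String key, long delta) {
        return counters.computeIfAbsent(key, k -> new AtomicLong()).addAndGet(delta);
    }

    /** Returns true only for the single caller whose increment produced 1. */
    boolean tryAcquire(String key) {
        if (increment(key, 1) == 1) {
            return true;
        }
        increment(key, -1);   // lost the race: undo our bump and report failure
        return false;
    }

    void release(String key) {
        increment(key, -1);
    }
}
```

If several threads contend, the counter may briefly rise above one, but every loser decrements its own bump back out, and only the thread that saw exactly 1 holds the lock; that is the atomic version of the increment/decrement idea discussed in this thread.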
I guess it depends on how much throughput you're expecting through the
part of the system which requires the lock. I agree that if there is
huge contention then this isn't the way to go, although I'd argue that
you should be changing your design anyway, since synchronizing across a
distributed architecture for a lot of threads is just inadvisable.
On Dec 15, 10:50 am, Simon qila...@gmail.com wrote:
although I'd argue that
you should be changing your design anyway since synchronizing across a
distributed architecture for a lot of threads is just inadvisable.
No doubt. I was simply commenting on the method in question.
The synchronized block won't work at all, since it's not guaranteed
that you only have a single instance of your application running at
any point in time. As soon as you have multiple instances, you
will be synchronizing in different JVMs, and hence you'll get multiple
threads accessing the cache at the same time.
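Since synchronized can only coordinate threads inside one JVM, the lock has to live in storage shared by every instance, and the acquire step has to be a single atomic "add only if absent" there. App Engine's MemcacheService.put accepts SetPolicy.ADD_ONLY_IF_NOT_PRESENT and returns whether the add won, which gives exactly that test-and-set across instances. The protocol shape, sketched with ConcurrentHashMap.putIfAbsent as an in-JVM stand-in (this shows the call pattern only, not cross-instance behavior; names are mine):

```java
import java.util.concurrent.ConcurrentHashMap;

// Cross-instance lock protocol: a single atomic "add if absent" on shared
// storage decides the winner. Here a ConcurrentHashMap stands in for
// memcache; on App Engine the same step would be
//   cache.put(key, owner, expiration, SetPolicy.ADD_ONLY_IF_NOT_PRESENT)
// whose boolean result says whether this instance won the lock.
class AddIfAbsentLock {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    boolean tryAcquire(String key, String owner) {
        return store.putIfAbsent(key, owner) == null;
    }

    /** Only the holder should release; the owner check guards against
     *  releasing a lock that some other instance now holds. */
    boolean release(String key, String owner) {
        return store.remove(key, owner);
    }
}
```

Storing an owner value (rather than a bare flag) lets release verify that the caller still holds the lock before deleting it.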
I was thinking about that (increment/decrement)...
But, first, if the tasks read the value of the variable that needs to be
incremented, isn't there a possibility that all tasks will read the same
value at the same time and then increment it at the same time?
On 14 December 2010 11:59, Simon
Except return lockIndex should just be return true.
--
You received this message because you are subscribed to the Google Groups
Google App Engine for Java group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to
Hm, super...
I was thinking of something like (val mod 2) == 0/1 (true/false), but the
lock reading wasn't safe.
Thanks!
On 14 December 2010 20:57, Jay Young jaydevfollo...@gmail.com wrote:
Except return lockIndex should just be return true.
Grr. Just noticed another issue. The cache item that you get the
index from is not the same item as the lock, so really you need two
different cache items: one to make sure you have a unique value for
the lock, and the lock itself. The code above is correct; you just
need different keys for the two.
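If I'm reading the fix right, the two items are: a counter key whose atomic increment mints a unique token per attempt, and the lock key that stores the winning token, so release can verify ownership. A runnable sketch with in-process stand-ins for the two memcache items (class and key names are mine, not from the thread):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Two cache items, as suggested above: a counter mints a unique token per
// acquisition attempt (memcache increment is atomic), and a separate lock
// entry holds the winning token. Releasing checks the token, so a caller
// can only release a lock it actually owns.
class TwoKeyLock {
    private final AtomicLong counter = new AtomicLong();  // stand-in for the counter key
    private final ConcurrentHashMap<String, Long> lock =
            new ConcurrentHashMap<>();                    // stand-in for the lock key

    /** Returns the token on success, or null if someone else holds the lock. */
    Long tryAcquire(String lockKey) {
        long token = counter.incrementAndGet();           // unique per attempt
        return lock.putIfAbsent(lockKey, token) == null ? token : null;
    }

    boolean release(String lockKey, long token) {
        return lock.remove(lockKey, token);               // only the holder's token matches
    }
}
```

Using distinct keys for the counter and the lock is the point being made above: the counter only has to be unique, while the lock entry is the thing contended for.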
Maybe there's something that I don't know very well about the memcache
service and this problem is simpler than I think it is... If so,
please advise :)
On Dec 13, 6:38 pm, Ice13ill andrei.fifi...@gmail.com wrote:
I need advice for implementing concurrent access to a memcache
variable, when
You can use the standard Java synchronization to synchronize the task
threads, e.g.

    synchronized (YourMemcachedDataClass.class) {
        YourMemcachedDataClass cachedData =
                (YourMemcachedDataClass) cache.get(CACHE_KEY);
        if (cachedData == null) {
            cachedData = new YourMemcachedDataClass();
        }
        // ... update cachedData, then write it back ...
        cache.put(CACHE_KEY, cachedData);
    }