Hi Kim,

Are you able to send us the code you use for step 3? And are you certain
nothing is changing the memcache concurrently with step 3?
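
To be concrete: all we need is the read path, roughly the shape of the
sketch below (the handler name, URL and key here are placeholders, not a
guess at your actual code):

    from google.appengine.api import memcache
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    TEST_KEY = 'test:key_1'  # placeholder: whatever key step 2 wrote to

    class ReadKeyHandler(webapp.RequestHandler):
        """Returns the value currently stored in memcache for TEST_KEY."""
        def get(self):
            value = memcache.get(TEST_KEY)
            self.response.headers['Content-Type'] = 'text/plain'
            if value is None:
                self.response.out.write('MISS')
            else:
                self.response.out.write(value)

    def main():
        run_wsgi_app(webapp.WSGIApplication([('/read_key', ReadKeyHandler)]))

    if __name__ == '__main__':
        main()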

On Tue, Jun 23, 2009 at 1:44 PM, Kim Riber <kimsteenri...@gmail.com> wrote:

>
> Hi Nick
>
> I run the test in 3 steps (with half a minute in between):
> 1. Heavy load process to spawn app instances
> 2. Write a lot of random values to the same key
> 3. Read the key from multiple threads.
> I can repeat the 3rd step and still get the same result (mostly 2
> different values).
>
> It seems like I hit 2 different memcache servers with different views
> of what is in the cache for that key
>
> >Also, are you running this against the dev_appserver, or in production?
> How do I see that?
> We are running against .appspot.com


If you are testing against the local development server, that's the
dev_appserver. If you're testing code running on appspot.com, that's
production.
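
If you want to check from inside the app itself, one quick way (a sketch
relying on the standard SERVER_SOFTWARE CGI variable, not an App
Engine-specific API) is:

    import os

    def in_production():
        # SERVER_SOFTWARE is 'Development/1.0' under the dev_appserver and
        # 'Google App Engine/1.x.y' when running on appspot.com.
        return not os.environ.get('SERVER_SOFTWARE', '').startswith('Development')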

-Nick Johnson


>
> -Kim
>
> On Jun 23, 12:39 pm, "Nick Johnson (Google)" <nick.john...@google.com>
> wrote:
> > Hi Kim,
> >
> > It's not clear from your description exactly how you're performing your
> > tests. Without extra information, the most likely explanation would be
> > that you're seeing a race condition in your code, where the key is
> > modified between subsequent requests to the memcache API.
> >
> > Also, are you running this against the dev_appserver, or in production?
> >
> > -Nick Johnson
> >
> >
> >
> > On Tue, Jun 23, 2009 at 7:18 AM, Kim Riber <kimsteenri...@gmail.com>
> wrote:
> >
> > > Just made another test to confirm the behavior I see.
> > > This example is much simpler: it simply has 10 threads writing random
> > > values to the same key in memcache.
> > > I would expect the last value written to be the one left in memcache.
> > > When I afterwards have 4 threads read 10 times from that same key,
> > > they return 2 different values.
> > > This only happens if, prior to the writing threads, I run some heavy
> > > tasks to force GAE to spawn more app instances.
> > > It seems like each server cluster might have its own memcache,
> > > independent of the others. I hope this is not true. In a thread
> > > from Ryan
> >
> > > http://groups.google.com/group/google-appengine/browse_thread/thread/...
> > > he states that
> >
> > > >as for the datastore, and all other current stored data APIs like
> > > >memcache, there is a single, global view of data. we go to great
> > > >lengths to ensure that these APIs are strongly consistent.
> >
> > > Regards
> > > Kim
> >
> > > On Jun 17, 8:51 pm, Kim Riber <kimsteenri...@gmail.com> wrote:
> > > > To clarify a bit:
> >
> > > > One thread from our server runs one loop with a unique id.
> > > > Each request stores a value in memcache and returns that value. In
> > > > the following request, memcache is queried to check whether the
> > > > value just written is in the cache.
> > > > This sometimes fails.
> >
> > > > My fear is that it is due to the requests switching to another app
> > > > instance and then suddenly getting the wrong data.
> >
> > > > instance 1 +++++  +++++
> > > > instance 2      --
> >
> > > > Hope this clears up the example above a bit
> >
> > > > Cheers
> > > > Kim
> >
> > > > On Jun 17, 7:52 pm, Kim Riber <kimsteenri...@gmail.com> wrote:
> >
> > > > > Hi,
> > > > > I'm experiencing some rather strange behavior from memcache. I
> > > > > think I'm getting different data back from memcache using the
> > > > > same key.
> > > > > The issue I see is that when putting load on our application, even
> > > > > simple memcache queries start to return inconsistent data. When
> > > > > running the same request from multiple threads, I get different
> > > > > results.
> > > > > I've made a very simple example that runs fine on 1-200 threads,
> > > > > but if I put load on the app (with some heavier requests) just
> > > > > before I run my test, I see different values coming back from
> > > > > memcache using the same keys.
> >
> > > > > import uuid
> > > > > from google.appengine.api import memcache
> > > > >
> > > > > def get_new_memcache_value(key, old_value):
> > > > >     # Read back what should be there, then store a fresh random value.
> > > > >     old_val = memcache.get(key)
> > > > >     new_val = uuid.uuid4().get_hex()
> > > > >     reply = 'good'
> > > > >     if old_val and old_value != "":
> > > > >         if old_val != old_value:
> > > > >             # The cache returned something other than the value we last wrote.
> > > > >             reply = 'fail'
> > > > >             new_val = old_value
> > > > >         else:
> > > > >             if not memcache.set(key, new_val):
> > > > >                 reply = 'set_fail'
> > > > >     else:
> > > > >         reply = 'new'
> > > > >         if not memcache.set(key, new_val):
> > > > >             reply = 'set_fail'
> > > > >     return (new_val, reply)
> >
> > > > > and from a server posting requests:
> >
> > > > > def request_loop(id):
> > > > >     key = "test:key_%d" % id
> > > > >     val, reply = get_new_memcache_value(key, "")
> > > > >     for i in range(20):
> > > > >         val, reply = get_new_memcache_value(key, val)
> >
> > > > > Is memcache working locally on a cluster of servers, so that if an
> > > > > application is spawned across more clusters, memcache will not
> > > > > propagate data to the other clusters?
> >
> > > > > I hope someone can clarify this, since I can't find any post
> > > > > regarding this issue.
> >
> > > > > Is there some way to get the application instance ID, so I can do
> > > > > some more investigation on the subject?
> >
> > > > > Thanks
> > > > > Kim
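
On the instance ID question just above: there's no public API for that at
the moment. A common workaround (just a sketch, nothing official) is to
stamp each instance with a module-level ID at import time, since module
globals stay in memory between requests handled by the same instance:

    import uuid

    # Generated once per app instance, when this module is first imported;
    # every request served by that instance sees the same value.
    INSTANCE_ID = uuid.uuid4().get_hex()

If you echo INSTANCE_ID alongside the memcache value in each response, you
can see which instance served each request.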
> >
> > --
> > Nick Johnson, App Engine Developer Programs Engineer
> > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> > 368047
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047
