We have a process that splits a job between backend instances and 
"synchronizes" via memcache (not the best method, I know, but it has been 
working for us). Earlier today we started seeing weird behavior that looks 
like one or more of the following:

1. Backends are acting funky 
2. Memcache latency/issues
3. URL Fetch latency/issues
4. Google Cloud Storage latency/issues
5. Task Queues are scheduling weirdly

From what I can see, tasks are being executed fine. They are, however, taking 
a lot longer to complete than anticipated (5-10x longer). I also see weird 
behavior from the "countdown" mechanism that synchronizes parts of the job 
(based on memcache): sporadic updates, and random resets down to '0' and back 
up to a more believable number. I wonder if the atomic incr/decr calls to 
memcache are failing to find the previous values properly. If I had to guess, 
the issue is with 1, 2, or 4 above. Other parts of my app hit the same URL 
endpoints this process hits, and they are running fine. As mentioned, the 
memcache values are acting the weirdest, so I suspect the issue resides there.
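
To make the countdown mechanism concrete, here is a stripped-down sketch of 
what I mean (the counter key, handler URL, and finalize step are made up for 
this post, not our actual code). The thing I'm wondering about is that 
memcache.decr() returns None when the key is missing, which would look exactly 
like the counter "not finding the previous value":

# Rough sketch of a memcache-based countdown for a sharded job.
# Key name, URL, and finalize() are hypothetical, for illustration only.
from google.appengine.api import memcache, taskqueue

COUNTER_KEY = 'job:%s:remaining'   # hypothetical per-job counter key

def finalize(job_id):
    # Placeholder for whatever runs once the last shard reports in.
    pass

def start_job(job_id, num_shards):
    # Seed the counter once, then fan the shards out to the task queue.
    memcache.add(COUNTER_KEY % job_id, num_shards)
    for shard in range(num_shards):
        taskqueue.add(url='/work', params={'job': job_id, 'shard': shard})

def shard_done(job_id):
    # Atomic decrement. If memcache has evicted the key, decr() returns
    # None instead of a number, and the countdown loses track of the job.
    remaining = memcache.decr(COUNTER_KEY % job_id)
    if remaining == 0:
        finalize(job_id)
    return remaining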

Has anyone else noticed weird behavior with memcache in recent hours?

Thank you,
Prateek



