I recently attempted to improve the responsiveness of one of my app's
more elementary handlers by using memcache to cache its datastore
lookups. According to my logs, this has had a positive effect on
api_cpu_ms, reducing it to about 72ms. However, cpu_ms has not seen a
similar decrease, and still hovers around 1000ms.

Do memcache gets count towards api_cpu_ms or cpu_ms? Do I need to
worry about performance issues around deserializing model instances in
memcache?
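For what it's worth, here is how I've been trying to gauge the deserialization
cost locally: a plain-Python pickle round-trip timing (my understanding is that
memcache pickles non-string values under the hood). The payload shape below is
just a stand-in I made up, not my real entities, so this is only a rough local
sketch rather than an App Engine measurement:

```python
import pickle
import timeit

# Stand-in for a cached response: a dict of simple model-like records.
# (Hypothetical data shape; real datastore entities would be heavier.)
payload = {"foo": [{"id": i, "name": "item-%d" % i} for i in range(500)],
           "edits": list(range(500))}

blob = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)

# Time just the deserialization step, which is what every cache hit pays.
per_load = timeit.timeit(lambda: pickle.loads(blob), number=100) / 100

print("blob size: %d bytes" % len(blob))
print("loads: %.3f ms per call" % (per_load * 1000))
```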

My caching strategy looks like this:

from google.appengine.api import memcache

response = dict()  # (might not be empty)
cached = memcache.get(__CACHE_KEY)
if cached is not None:
  # cache hit: skip the datastore entirely
  response.update(cached)
  return response
else:
  # cache miss: datastore calls
  foo = get_foo()
  bar = get_bar()
  # build the cache object
  cached = dict(foo=foo, edits=bar)
  response.update(cached)
  # cache it for subsequent requests
  memcache.set(__CACHE_KEY, cached)
  return response
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~----------~----~----~----~------~----~------~--~---
