Thanks, Tony. I'll try the profiling and post again if I discover
anything interesting.
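
For reference, here is roughly the profiling wrapper from that KB
article that I plan to try. This is just a sketch: real_main stands in
for my existing main(), and "application" is a placeholder for my
handler's WSGIApplication.

import cProfile
import logging
import pstats
import StringIO

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

# placeholder: the app's real handlers would be registered here
application = webapp.WSGIApplication([])

def real_main():
    # the normal, unprofiled entry point
    run_wsgi_app(application)

def profile_main():
    # run real_main() under cProfile and log the hottest calls
    prof = cProfile.Profile()
    prof = prof.runctx("real_main()", globals(), locals())
    stream = StringIO.StringIO()
    stats = pstats.Stats(prof, stream=stream)
    stats.sort_stats("cumulative")  # or "time" to sort by self time
    stats.print_stats(80)           # cap the number of lines logged
    logging.info("Profile data:\n%s", stream.getvalue())

main = profile_main

if __name__ == "__main__":
    main()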

On Jun 22, 11:36 am, Tony <fatd...@gmail.com> wrote:
> I see; I didn't realize you were just calling the dict method.  In
> that case, 1000ms seems unusually high.  Still, it seems unlikely that
> memcache usage is causing it.  Your best bet is to profile requests
> (http://code.google.com/appengine/kb/commontasks.html#profiling) to
> this handler and see where the CPU time is being spent; you might
> have some large imports or something elsewhere that's causing a
> performance drop.
>
> On Jun 22, 2:29 pm, John Tantalo <john.tant...@gmail.com> wrote:
>
> > Tony,
>
> > The "update" call is the standard dict.update[1], which should be
> > plenty fast for my purposes.
>
> > My data is actually under a kilobyte, so I am quite confused as to
> > why it would take nearly 1000ms of CPU time.
>
> > Here's an example of the data (in yaml format) with some personally
> > identifying information stripped out:
>
> > http://emend.appspot.com/?yaml
>
> > The actual data being cached is slightly larger, but not by much.
>
> > [1] http://docs.python.org/library/stdtypes.html#dict.update
>
> > On Jun 22, 11:06 am, Tony <fatd...@gmail.com> wrote:
>
> > > Without knowing more about your app, I can't say for sure, but it
> > > seems likely that whatever processing takes place in
> > > "response.update(object)" is using your CPU time, which is why you
> > > don't see much of a speedup via caching here.  I would suggest
> > > profiling the operation to determine which function calls are
> > > taking the most resources.  In my experience, you won't notice a
> > > large difference in CPU usage between serializing model instances
> > > to memcache and storing identifier information (like db keys) for
> > > fetching later.  My entities are small, however, so your mileage
> > > may vary.  I find that the primary tradeoff in serializing large
> > > amounts of data to memcache is increased memory pressure, and thus
> > > a lower memcache hit rate and more datastore access.
>
> > > On Jun 22, 12:48 pm, John Tantalo <john.tant...@gmail.com> wrote:
>
> > > > I recently attempted to improve the responsiveness of one of my app's
> > > > more elementary handlers by using memcache to cache the datastore
> > > > lookups. According to my logs, this has had a positive effect on my
> > > > api_cpu_ms, reducing it to 72ms. However, cpu_ms has not seen a
> > > > similar decrease, and still hovers around 1000ms.
> >
> > > > Do memcache gets count towards api_cpu_ms or cpu_ms? Do I need to
> > > > worry about performance issues around deserializing model instances
> > > > stored in memcache?
>
> > > > My caching strategy looks like this:
>
> > > > response = dict()  # (might not be empty)
> > > > # memcache.get returns None on a miss, so check for None explicitly;
> > > > # an empty cached dict would otherwise be treated as a miss
> > > > cached = memcache.get(__CACHE_KEY)
> > > > if cached is not None:
> > > >   response.update(cached)
> > > >   return
> > > > else:
> > > >   # datastore calls
> > > >   foo = get_foo()
> > > >   bar = get_bar()
> > > >   # build the cache object and merge it into the response
> > > >   cached = dict(foo=foo, edits=bar)
> > > >   response.update(cached)
> > > >   # store it for subsequent requests
> > > >   memcache.set(__CACHE_KEY, cached)
> > > >   return
>
>
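
For anyone who finds this thread later, here is a sketch of the two
caching strategies Tony compares above: caching serialized entities
versus caching only their keys. The Edit model and the fetch(20) query
are made up for illustration; the serialize/deserialize helpers use the
protobuf round-trip that google.appengine.ext.db provides.

from google.appengine.api import memcache
from google.appengine.datastore import entity_pb
from google.appengine.ext import db

class Edit(db.Model):
    # made-up model, just for illustration
    title = db.StringProperty()

def serialize_entities(models):
    # encode each instance as a protobuf string; this is more compact
    # than letting memcache pickle the model objects directly
    return [db.model_to_protobuf(m).Encode() for m in models]

def deserialize_entities(blobs):
    return [db.model_from_protobuf(entity_pb.EntityProto(b))
            for b in blobs]

# Strategy 1: cache the serialized entities themselves.
def get_edits_cached():
    blobs = memcache.get("edits")
    if blobs is not None:
        return deserialize_entities(blobs)
    edits = Edit.all().fetch(20)
    memcache.set("edits", serialize_entities(edits))
    return edits

# Strategy 2: cache only the keys, then re-fetch by key.
def get_edits_by_key():
    keys = memcache.get("edit_keys")
    if keys is None:
        edits = Edit.all().fetch(20)
        memcache.set("edit_keys", [e.key() for e in edits])
        return edits
    return db.get(keys)  # one batch get by key

Strategy 1 avoids the datastore round-trip entirely on a hit but stores
more data in memcache; strategy 2 keeps memcache small at the cost of a
db.get on every request, which matches the tradeoff Tony describes.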