Check out the cache utility in gaeutilities. 
http://gaeutilities.appspot.com/cache

Looking at the demo, it appears I need to update that page. Anyhow,
cache uses both the datastore and memcache.

When you write a cache entry, it writes to the datastore, then to
memcache.
When you attempt to read a cache entry, it tries memcache first, then
the datastore. If there's a hit in the datastore but not in memcache,
it repopulates memcache.
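The write-through / read-through path above can be sketched with plain dicts standing in for memcache and the datastore (the names and helpers here are illustrative, not the gaeutilities internals):

```python
# Plain-Python sketch of the write/read path described above.
# `memcache` and `datastore` are dict stand-ins, NOT the real App Engine APIs.
memcache = {}
datastore = {}

def cache_set(key, value):
    # Write to the datastore first, then to memcache.
    datastore[key] = value
    memcache[key] = value

def cache_get(key):
    # Try memcache first.
    if key in memcache:
        return memcache[key]
    # Fall back to the datastore; repopulate memcache on a hit.
    if key in datastore:
        memcache[key] = datastore[key]
        return memcache[key]
    return None
```

If memcache evicts an entry, the next read is served from the datastore and memcache is repopulated, which is the behavior described above.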

It supports timeouts ("my cache entry is only good for 5 minutes")
and can be used as a standard dictionary object:

c = cache.Cache()
c['cachehit'] = "test value"
if 'cachehit' in c:
    do_something()
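The timeout behavior amounts to storing an expiry time alongside each entry; a minimal dict-like stand-in (a sketch only, not the actual gaeutilities implementation) could look like:

```python
import time

class TimeoutCache:
    """Dict-like cache whose entries expire after `timeout` seconds.

    Illustrative sketch, not the gaeutilities Cache class.
    """
    def __init__(self, timeout=300):  # 300s = the 5-minute example above
        self.timeout = timeout
        self._store = {}  # key -> (value, expiry timestamp)

    def __setitem__(self, key, value):
        self._store[key] = (value, time.time() + self.timeout)

    def __contains__(self, key):
        entry = self._store.get(key)
        if entry is None:
            return False
        value, expires = entry
        if time.time() >= expires:
            del self._store[key]  # lazily evict stale entries
            return False
        return True

    def __getitem__(self, key):
        if key not in self:  # __contains__ handles expiry
            raise KeyError(key)
        return self._store[key][0]
```

Reads go through `__contains__`, so an expired entry looks exactly like a miss.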

It was originally written before appengine had memcache support, and
was updated when that was provided.
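As for the invalidation question quoted below, one common approach (a sketch under my own assumptions, not something gaeutilities does for you) is to fold a per-user generation number into each page key, so bumping the generation invalidates every "modelName-userName" page at once:

```python
# Generation-number invalidation sketch; `cache` is a dict stand-in
# for whatever cache backend is in use.
cache = {}

def _generation(model_name, user_name):
    # Current generation for this model/user pair, defaulting to 0.
    return cache.setdefault("gen-%s-%s" % (model_name, user_name), 0)

def page_key(model_name, user_name, page_num):
    # The generation is part of the key, e.g. "Contact-jonathan-g0-p2".
    gen = _generation(model_name, user_name)
    return "%s-%s-g%d-p%d" % (model_name, user_name, gen, page_num)

def invalidate(model_name, user_name):
    # Bumping the generation makes every old page key unreachable;
    # the stale entries simply age out of the cache.
    cache["gen-%s-%s" % (model_name, user_name)] += 1
```

No per-page deletes are needed: after `invalidate()`, every lookup builds a key nobody has written yet, so all pages miss and get refilled.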

On Mar 3, 8:02 am, Jonathan <jricket...@gmail.com> wrote:
> I am using a restful interface for an ajax application and want to be
> able to store the results of queries in memcache, as much of this data
> is read much more often than it is written, but it is occasionally
> written.
>
> I have been trying to think of strategies for how to do this, whilst
> also maintaining the ability to invalidate the cache when necessary.
>
> so for example:
> the user requests page 1 of their objects (0-9) and I store them with
> a key of "modelName-userName-pageNum"
> the user requests page 2 of their objects (10-19) and I store them
> with a key of "modelName-userName-pageNum"
> the user modifies an object on page 2, (or deletes it, or creates a
> new one) and I want to invalidate all "modelName-userName" cached
> lists.
>
> how do I do this???
>
> jonathan
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
