On 11.07.2007., at 19:27, David Cramer wrote:

> That's an interesting solution. My main reason for not doing that is
> because I've been told the dispatcher sucks (it's slow) and we're
> going for speed.

Using CachedModel as a base class would be ideal, but ... this will  
not work until model subclassing is fixed. And I really hate to write  
save/delete methods just to call the manager's clean() :)

But if speed is critical, one can always choose not to use the  
track_cache helper and instead add the appropriate clean() calls to  
the model's save/delete methods, avoiding the dispatcher overhead.

> I don't see the reasoning for adding QUERY_ to the cache_key. By
> default the cache_key is your db_table. So if your model is
> myapp.HelloWorld it will most likely be myapp_helloworld.

QUERY_ is removed in the last version: http://dpaste.com/hold/14122/

Now a CQS_ prefix is added to the cache keys just to increase key  
uniqueness a little.

> But with
> memcached and most caching solutions at the moment you cant do simply
> invalidation with the engine. However, based on what they were talking
> about with memcached's idea, you could set a generation counter in
> memory to force a refresh of the entire cache, or parts of the cache,
> or even handle it like the memcached proposal would.

(if I understood this correctly)

With the current cache back-ends there is no way to do mass key  
deletion based on some criteria (it would be great if I could do  
something like this:
[cache.delete(c) for c in cache.keys() if c.startswith('<key_prefix>')]
) and there is no way (well, I could make *another* global registry  
and some fancy way to register cache keys there, but still per thread/ 
process only) to know which keys to delete when clean() is called.
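The generation-counter idea from the quote above sidesteps that limitation: instead of deleting keys, you embed a per-model counter in every key and bump it to invalidate. A rough sketch (all names here are hypothetical; a real implementation would use the back-end's atomic incr on shared storage):

```python
cache = {}  # stand-in for a shared cache back-end

def current_generation(model_key):
    # generation counter stored alongside the data, starting at 0
    return cache.setdefault('GEN_%s' % model_key, 0)

def query_key(model_key, query_hash):
    # the generation number is part of the key, so bumping it
    # makes every old entry unreachable (effectively invalidated)
    return 'CQS_%s_%d_%s' % (model_key, current_generation(model_key), query_hash)

def invalidate(model_key):
    # bump the counter; stale entries simply expire from the cache later
    cache['GEN_%s' % model_key] = current_generation(model_key) + 1
```

Nothing is ever mass-deleted; stale entries just become unreachable and fall out of the cache on their own.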

Because of that I chose to keep all of the related caches under one  
cache key (CQS_<app_name>_<model_name>) so I can delete (and  
invalidate) all of the keys when the data is changed. The downside of  
this approach is that if you have some big rows and they are cached,  
it can suck up a lot of memory due to the duplicate data; maybe I can  
just store the key names there instead of the full data -- I'll try  
this in the morning.
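The key-names variant might look something like this (a sketch with assumed names; the dict again stands in for the cache back-end):

```python
cache = {}  # stand-in for the cache back-end

def model_key(app, model):
    # one registry entry per model, e.g. CQS_myapp_foo
    return 'CQS_%s_%s' % (app, model)

def cache_query(app, model, key, data):
    cache[key] = data
    # register only the key *name* under the per-model entry,
    # instead of duplicating the row data there
    names = cache.get(model_key(app, model), set())
    names.add(key)
    cache[model_key(app, model)] = names

def clean(app, model):
    # delete every registered key, then the registry itself
    for name in cache.pop(model_key(app, model), set()):
        cache.pop(name, None)
```

This keeps the one-key-per-model invalidation while storing each query result only once.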

The locmem back-end (implemented as thread-local storage, IIRC) is  
not safe to use with this - well, nothing bad will happen, there is  
just the possibility of getting stale data from the cache, because  
another process can update the data without invalidating the cache in  
the current process.

Other back-ends are safe from this IMHO.

To be on the safe side one can make models like this:

class Foo(models.Model):
    objects = models.Manager()  # AKA _default_manager
    cached = CachedManager()
    ...
track_cache(Foo)  # lazy and slower way of doing cache invalidation ;)

and use Foo.cached for the cached access and Foo.objects for the  
direct one.

-- 
Nebojša Đorđević - nesh, ICQ#43799892, http://www.linkedin.com/in/neshdj
Studio Quattro - Niš - Serbia
http://studioquattro.biz/ | http://code.google.com/p/django-utils/
Registered Linux User 282159 [http://counter.li.org]



You received this message because you are subscribed to the Google Groups 
"Django developers" group.
