Oh, beautiful code! Could you paste the Dependency model as well?


On 15 sep, 03:41, "Honza Král" <[EMAIL PROTECTED]> wrote:
> we use something like http://dpaste.com/19671/
>
> it
>   - invalidates the cache when an object is updated (based on a registered test)
>   - can cooperate with Apache ActiveMQ to propagate the signals
> across multiple boxes
>   - provides a decorator for any function, including handling the
> invalidation registry
>   - provides a simple function to retrieve a single object
>   - records cache dependencies as it creates the caches
>
> This is used throughout the application - we cache individual objects,
> and we cache their representation (so-called boxes) - a small template
> that is used to represent a single object. When a box is rendered
> inside another, dependency information is recorded, and when the cache
> gets invalidated, that dependency is taken into account.
>
> We are still in the early stages with this, but the foundation seems
> solid. We retain fine-grained control over what gets cached and
> what doesn't (caching the entire page simply isn't an option), and the
> invalidation is taken care of automatically.
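
A minimal sketch of the kind of decorator and invalidation registry described above, assuming Django's cache framework; the names (cache_result, DEPENDENCIES, invalidate) are illustrative guesses, not the actual code behind the dpaste link:

# Sketch of a caching decorator with an invalidation registry.
# All names here are illustrative assumptions, not the code from the paste.
from functools import wraps

from django.core.cache import cache

# maps (model class, pk) -> set of cache keys that depend on that object
DEPENDENCIES = {}

def cache_result(key_func, depends_on=None, timeout=None):
    """Cache the decorated function's result and record what it depends on."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = key_func(*args, **kwargs)
            result = cache.get(key)
            if result is None:
                result = func(*args, **kwargs)
                cache.set(key, result, timeout)
                # remember which objects this cache entry was built from
                for obj in (depends_on(*args, **kwargs) if depends_on else ()):
                    DEPENDENCIES.setdefault((obj.__class__, obj.pk), set()).add(key)
            return result
        return wrapper
    return decorator

def invalidate(sender, instance, **kwargs):
    """Signal receiver: drop every cache entry that depends on `instance`."""
    for key in DEPENDENCIES.pop((sender, instance.pk), ()):
        cache.delete(key)

# typically hooked up per model, e.g.:
#   from django.db.models.signals import post_save, post_delete
#   post_save.connect(invalidate, sender=Article)
#   post_delete.connect(invalidate, sender=Article)
# (an in-process dict obviously doesn't propagate across boxes; that is
# where something like ActiveMQ or a shared cache would come in)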
>
> On 9/15/07, David Cramer <[EMAIL PROTECTED]> wrote:
>
>
>
>
>
> > So in the past months, we've tried several different methods of
> > handling caching over at Curse. For the unfamiliar, Curse.com is a
> > very high-traffic, dynamic-content-driven website.
>
> > - Caching frequently used querysets with cache.get()/set()
> > - Caching entire views
> > - Caching middleware
>
> > Now we've been presented with a few problems.
>
> > - View caching sucks for us, as views have to be invalidated all of
> > the time. It CAN be good for some pages, such as entrance pages,
> > which get 30-40 req/s on a slow day.
> > - Caching middleware won't work for us, as we can't cache the entire
> > site. Even if users weren't logged in, it'd still expire all the time.
> > - .get()/.set() rock, but that's way too much code to always write, and
> > invalidation sucks.
>
> > So, what are some solutions:
>
> > 1. We continue to use view caching for entrance pages, and find some
> > way to standardize keys for .get()/.set()
>
> > The first option can work for some, but is way too time-consuming. You
> > have to worry about invalidation all the time, and you still need to
> > make sure your cache keys are standardized. We invalidated content
> > through save() and delete() methods, but this was iffy.
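
For illustration, a minimal sketch of option 1 - one standardized key helper plus save()/delete() invalidation; the key format and the names (object_cache_key, Article, get_article) are hypothetical, not Curse's actual code:

# Sketch of one standardized key format plus save()/delete() invalidation.
from django.core.cache import cache
from django.db import models

def object_cache_key(model, pk):
    # a single agreed-upon format used everywhere: "app.model:pk"
    return '%s.%s:%s' % (model._meta.app_label,
                         model._meta.object_name.lower(), pk)

class Article(models.Model):
    title = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        super(Article, self).save(*args, **kwargs)
        cache.delete(object_cache_key(Article, self.pk))

    def delete(self, *args, **kwargs):
        cache.delete(object_cache_key(Article, self.pk))
        super(Article, self).delete(*args, **kwargs)

def get_article(pk):
    """Fetch an Article through the cache, filling it on a miss."""
    key = object_cache_key(Article, pk)
    article = cache.get(key)
    if article is None:
        article = Article.objects.get(pk=pk)
        cache.set(key, article)
    return article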
>
> > 2. We develop some uber system, such as the CacheManager we created a
> > while back, but with invalidation that actually works.
>
> > This is where I'm asking for everyone's assistance. Throw out any ideas
> > you have, as this can help not only us, but a LOT of users out there.
>
> > The ideal system would run off an event-driven cache (see the sketch
> > after this list).
>
> > - Cache entries would not have expire times, but rather expire when
> > told to (signals/save and delete overrides?).
> > - The system would generate its own cache keys, and even cache
> > requests on its own, to prevent inconsistent naming and slip-ups from
> > missing cache calls.
> > - Different mechanisms for different data sets. .get() requests would
> > only expire when events tell them to; .filter() and so on would need to
> > be configurable on a per-content-type (model) basis.
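
A rough sketch of the key-generation and per-model policy pieces of that wish list, assuming Django's cache framework; CACHE_POLICY, auto_key and cached_filter are hypothetical names, not an existing API:

# Sketch of automatic cache-key generation plus a per-model policy table.
import hashlib

from django.core.cache import cache

# per-model configuration: .get() results live until invalidated (None),
# .filter()/.all() results get a model-specific timeout in seconds
CACHE_POLICY = {
    'myapp.category': {'get': None, 'filter': 15 * 60},
    'myapp.mymodel':  {'get': None, 'filter': 60 * 60},
}

def _label(model):
    return '%s.%s' % (model._meta.app_label, model._meta.object_name.lower())

def auto_key(model, method, **lookup):
    """Build a deterministic key from the model, method, and lookup args."""
    raw = '%s:%s:%s' % (_label(model), method, sorted(lookup.items()))
    return 'auto:' + hashlib.md5(raw.encode('utf-8')).hexdigest()

def cached_filter(model, **lookup):
    """Run model.objects.filter(**lookup) through the cache."""
    key = auto_key(model, 'filter', **lookup)
    result = cache.get(key)
    if result is None:
        result = list(model.objects.filter(**lookup))
        cache.set(key, result, CACHE_POLICY[_label(model)]['filter'])
    return result

# usage: top_level = cached_filter(Category, parent__isnull=True)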
>
> > I believe the CacheManager (a metaclass on your models) was a good
> > solution, but it may not be ideal. The way it works is that it overrides
> > the objects manager, and also provides a no_cache manager. This works
> > great, and if you could override save/delete (we couldn't seem to) you
> > could provide invalidation right there for a lot of stuff. We still
> > run into the issue of .filter() and .all() caching.
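
A manager-based sketch of that idea; since overriding save/delete was the sticking point, this version invalidates through post_save/post_delete signals instead. CachingManager, pk_key and Tag are hypothetical names, not the actual Curse CacheManager:

# Rough sketch of a caching manager with a no_cache escape hatch.
from django.core.cache import cache
from django.db import models
from django.db.models.signals import post_save, post_delete

def pk_key(model, pk):
    return '%s.%s:%s' % (model._meta.app_label,
                         model._meta.object_name.lower(), pk)

class CachingManager(models.Manager):
    def get(self, *args, **kwargs):
        pk = kwargs.get('pk', kwargs.get('id'))
        if pk is None:
            # fall back to an uncached query for non-pk lookups
            return super(CachingManager, self).get(*args, **kwargs)
        key = pk_key(self.model, pk)
        obj = cache.get(key)
        if obj is None:
            obj = super(CachingManager, self).get(*args, **kwargs)
            cache.set(key, obj)
        return obj

def _invalidate(sender, instance, **kwargs):
    cache.delete(pk_key(sender, instance.pk))

class Tag(models.Model):
    name = models.CharField(max_length=50)

    objects = CachingManager()      # cached .get() by pk
    no_cache = models.Manager()     # plain manager for opting out

post_save.connect(_invalidate, sender=Tag)
post_delete.connect(_invalidate, sender=Tag)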
>
> > In our situation, pages where we use .filter() and .all() usually don't
> > update instantly; they generally expire after 15 minutes to 1 hour, or,
> > on requests where we use sharding, they expire the same way a .get()
> > would, via save/delete overrides.
>
> > Speaking of sharding, another optimization we did is
> > CachedForeignKey. This acted as a preliminary sharding mechanism which
> > cached the table's entire result set in memory and cleared it
> > when needed. These were used on keys pointing to tables which would
> > hold fewer than 50-100 entries and were accessed often.
>
> > A good example of the benefits of sharding, and caching the data from
> > it:
>
> > # CachedForeignKey is our custom field; assumes the usual
> > # `from django.db import models` and `from django.core.cache import cache`
> > class Category(models.Model):
> >     parent = models.CachedForeignKey('self')
> >     name = models.CharField(max_length=100)

> >     def save(self, update_cache=True):
> >         super(Category, self).save()
> >         if update_cache:
> >             # drop the parent's cached result set
> >             cache.delete(self.parent.get_cache_key())

> >     def delete(self, update_cache=True):
> >         if update_cache:
> >             cache.delete(self.parent.get_cache_key())
> >         super(Category, self).delete()

> > class MyModel(models.Model):
> >     category = models.CachedForeignKey(Category)
> >     name = models.CharField(max_length=100)
>
> > This allows `MyModel.objects.all()` to give you the same results as
> > `MyModel.objects.select_related('category__parent')`
>
> > So, ideas and use cases are greatly welcomed :)
>
> --
> Honza Král
> E-Mail: [EMAIL PROTECTED]
> ICQ#:   107471613
> Phone:  +420 606 678585

