This appears to be a proposal to re-implement triggers inside Django.

I can see there are benefits if the underlying DB platform won't support
triggers, but wouldn't triggers be the preferred solution when they're
available? That way there is no chance that changes can be made outside
the scope of the denormalization, and hence no need to recompute the
denormalized values.
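
For example, a database-level version of the username-copying case
discussed below might look something like this (a rough sketch only,
assuming PostgreSQL; the table and column names are made up):

    from django.db import connection

    # Rough sketch, PostgreSQL only. "app_user" and "app_post" are
    # hypothetical tables; app_post.author_name caches app_user.username.
    TRIGGER_SQL = """
    CREATE OR REPLACE FUNCTION sync_author_name() RETURNS trigger AS $$
    BEGIN
        UPDATE app_post SET author_name = NEW.username
            WHERE author_id = NEW.id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER author_name_sync AFTER UPDATE ON app_user
        FOR EACH ROW EXECUTE PROCEDURE sync_author_name();
    """

    connection.cursor().execute(TRIGGER_SQL)

Since the trigger runs inside the database, even writes that bypass the
ORM entirely keep the cached column correct.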

regards
 Steve

Andrew Godwin wrote:
> David Cramer wrote:
>   
>> If you're not doing denormalization in your database, most likely
>> you're doing something wrong. I really like the approach that is
>> offered here.
>>
>> For me, personally, it would be great if this could accept callables
>> as well. So you could store the username, like so, or you could store
>> a choices field like:
>>
>>     field = models.IntegerField(choices=CHOICES)
>>     denorm = models.DenormField('self', 'get_field_display')
>>     # which would rock if it was .field.display ;)
>>
>> You could also make it somehow accept querysets or something similar
>> for things like count(). I see a lot of possibilities and am a bit
>> disappointed I didn't come up with something this easy for my use-
>> cases.
>
> The key is making sure you can listen for changes on whatever's at the
> other end of your denormalisation. With my current snippet, it listens
> for a save on the model the foreign key points to, then checks for the
> right ID; if we start accepting arbitrary querysets, then there has to be
> a way to resolve that back to conditions the signal listener can understand.
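
Presumably the snippet does something along these lines (a rough sketch
with made-up models; the real DenormField would wire this up internally):

    from django.contrib.auth.models import User
    from django.db import models
    from django.db.models.signals import post_save

    class Post(models.Model):
        author = models.ForeignKey(User)
        # Denormalised copy of author.username, kept fresh by the listener.
        author_name = models.CharField(max_length=30, editable=False)

    def copy_username(sender, instance, **kwargs):
        # instance is the User that was just saved; push its username into
        # every Post whose foreign key points at it.
        Post.objects.filter(author=instance).update(
            author_name=instance.username)

    post_save.connect(copy_username, sender=User)
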
>
> Still, with an
> AggregateField(Sandwiches.filter(filling="cheese").count()) it's
> possible to work out that you want to listen on the Sandwiches model,
> and you could then fall back to re-running the count on every Sandwich
> save, even if it ends up not having a cheese filling.
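
That coarse fallback might look like this (again a sketch; the stats
model is invented just to hold the cached count):

    from django.db import models
    from django.db.models.signals import post_save

    class Sandwich(models.Model):
        filling = models.CharField(max_length=50)

    class SandwichStats(models.Model):
        # Invented single-row model that holds the cached aggregate.
        cheese_count = models.IntegerField(default=0)

    def refresh_cheese_count(sender, instance, **kwargs):
        # Coarse fallback: recompute on *every* Sandwich save,
        # cheese-filled or not.
        count = Sandwich.objects.filter(filling="cheese").count()
        SandwichStats.objects.update(cheese_count=count)

    post_save.connect(refresh_cheese_count, sender=Sandwich)
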
>
> So, I think the best approach would be one to replicate fields (like my 
> current DenormField; perhaps call it CopyField or something) and one to 
> cache aggregates (an AggregateField, like above).
>
> Simon Willison wrote:
>   
>> Just so it's on the record, I'd like any denormalisation tools in
>> Django to include a mechanism for re-synchronizing them should
>> something go awry (like direct updates being applied to the database
>> without keeping the denormalised fields in sync). This mechanism could
>> then be exposed as a ./manage.py command which could be called
>> periodically to verify and fix any incorrect data.
>>     
> Yes, this I very much agree with. The reason you always layer this stuff
> on top of a normalised database is that you can then rebuild the
> data after playing with it externally.
>
> Doing so shouldn't be too much of a problem; have a management command
> that loads the models and then executes the update method on each of
> the denormalised fields.
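
Something like this, presumably (a sketch only; it assumes DenormField
grows an update() method and is importable from wherever it ends up):

    from django.core.management.base import NoArgsCommand
    from django.db import models
    # DenormField is the proposed field class from this thread; its
    # eventual import location is assumed here.

    class Command(NoArgsCommand):
        help = "Recompute every denormalised field from its source data."

        def handle_noargs(self, **options):
            for model in models.get_models():
                denorm_fields = [f for f in model._meta.fields
                                 if isinstance(f, DenormField)]
                if not denorm_fields:
                    continue
                for obj in model.objects.all():
                    for field in denorm_fields:
                        # Assumed API: recompute this field's value on obj.
                        field.update(obj)
                    obj.save()
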
>
> Justin's idea of lazy updating is interesting, and probably quite doable
> (as well as being what most people will want by default on aggregate queries).
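
A lazy version could flag the cached value stale on each save and only
recompute when it is next read; reusing the made-up Sandwich model from
the sketch above:

    from django.db import models
    from django.db.models.signals import post_save

    class CheeseCount(models.Model):
        value = models.IntegerField(default=0)
        stale = models.BooleanField(default=True)

        def current_value(self):
            # Recompute only when the cached value is actually read.
            if self.stale:
                self.value = Sandwich.objects.filter(
                    filling="cheese").count()
                self.stale = False
                self.save()
            return self.value

    def mark_stale(sender, instance, **kwargs):
        # Saves stay cheap: just flag every cached count as dirty.
        CheeseCount.objects.update(stale=True)

    post_save.connect(mark_stale, sender=Sandwich)
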
>
> I'm also hoping any kind of aggregate denormalisation will work with any 
> future extended aggregate support, but if the field just takes a normal 
> QuerySet, that might Just Work™.
>
> Andrew


