For some reason, when I call

    x.save()

over and over, it also takes about 200K per call.

Either I have something really messed up, or there is a real problem
with this API.
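Incidentally, here's roughly how I'm reading the memory figures (a minimal sketch; `parse_vmrss` and `current_rss_kb` are just names I made up for this post, and it's Linux-only since it reads /proc, as described below):

```python
import os
import re

def parse_vmrss(status_text):
    # Pull the "VmRSS:  <n> kB" line out of /proc/<pid>/status text.
    m = re.search(r"^VmRSS:\s*(\d+)\s*kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def current_rss_kb():
    # Linux-only: resident set size of this process, in kB.
    with open('/proc/%d/status' % os.getpid()) as f:
        return parse_vmrss(f.read())
```

The per-call figures in this thread are just deltas of that value taken around each loop.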



On Aug 28, 10:34 am, "ristretto.rb" <[EMAIL PROTECTED]> wrote:
> To summarize - I double checked that DEBUG=False and ran all the tests
> again:
>
>             #  20K per call
>             x = Page.objects.filter(
>                         (Q(type='P') & (Q(status='N') | Q(status='R'))) &
>                         (Q(last_action_datetime__isnull=True) |
>                          Q(last_action_datetime__lt=cut_off))).count()
>
>             #  4K-12K
>             x = Page.objects.filter(type='P', status='N').count()
>
>             #  4K-12K
>             x = Page.objects.filter(type='P').count()
>
>             #  4K-12K
>             x = Page.objects.filter(status='N').count()
>
>             #  0K
>             x = Page.objects.all().count()
>
>             #  0K
>             x = Page.objects.extra(where=
>                         ["(type='P' and (status='N' or status='R') and \
>                         (last_action_datetime is null or \
>                         last_action_datetime < '2009-1-1'))"]).count()
>
> On Aug 28, 10:17 am, "ristretto.rb" <[EMAIL PROTECTED]> wrote:
>
> > Previously I found get_or_create() to be leaky, so I stopped using
> > it.  Now I'm seeing that filter() leaks as well.
>
> > First, yes, I have DEBUG=False, and I have double checked that
> > connection.queries is empty.
>
> > Suppose the following model
>
> > class Page(models.Model):
> >     type = models.CharField(max_length=1,default='P')
> >     status = models.CharField(max_length=2,default='N')
> >     last_action_datetime = models.DateTimeField(null=True)
>
> > All memory figures come from the VmRSS: (resident) line in
> > /proc/{pid}/status, on Linux.
>
> > If I call the following using filter() and Q()
>
> > x = Page.objects.filter(
> >             (Q(type='P') & (Q(status='N') | Q(status='R'))) &
> >             (Q(last_action_datetime__isnull=True) |
> >              Q(last_action_datetime__lt=cut_off))).count()
>
> > repeatedly, memory climbs by about 50KB / loop.
>
> > Suspecting Q, I tried this
>
> > x = Page.objects.filter(type='P', status='N').count()
>
> > which increases memory by about 4-16KB per loop,
>
> > as does
> > x = Page.objects.filter(type='P').count(), and
> > x = Page.objects.filter(status='N').count()
>
> > If I change it to
>
> > x = Page.objects.all().count()
>
> > Memory stays constant, but that isn't filtering enough.  So I tried
>
> > x = Page.objects.extra(where=
> >     ["(type='P' and (status='N' or status='R') and \
> >     (last_action_datetime is null or \
> >     last_action_datetime < '2009-1-1'))"]).count()
>
> > Memory stays constant for this one too.  Although this isn't the best
> > DRY form, it doesn't seem to leak.
>
> > At this point, I figure that if you plan on making lots of calls
> > that would use filter(), you should probably use extra() instead.
>
> > Clearly, I'm taking the Django DB layer, probably designed primarily
> > to support web development, and using it for heavy scripting.  It
> > works great, other than these memory problems.  And since we are also
> > building a webapp, it keeps our data access DRY.
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---