Hello,

I have been seeing a significant performance degradation with my
application in production.
The traffic is roughly 2.5K pageviews per day, and each page loads
around 100 model objects into memory.

Some might overlap, but I have a large inventory of objects to show.

I have noticed that pages have been taking longer and longer to load,
to the point where it's unacceptable.
Looking at the WSGI processes, I found that a single request seems to
be taking up a large amount of CPU.

I've been poking around to see how I could improve things, and I've
noticed this behavior:

from project.app.models import Model
# Slicing translates to LIMIT/OFFSET; list() forces evaluation and
# builds a full Model instance for every row.
m = Model.objects.all()[200:300]
len(list(m))

This takes several seconds.

from project.app.models import Model
# values() returns plain dicts instead of model instances.
m = Model.objects.all()[400:500]
len(list(m.values()))

This is much faster. If you're going to try it, make sure you choose a
range of objects that is not already in memory.
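In case anyone wants to reproduce this, here is a rough sketch of how I
measure the gap (just timeit in a Django shell; the slice ranges are
arbitrary, and the Model import is from my project):

import timeit
from project.app.models import Model

# Compare building full model instances vs. plain dicts via values().
# Pick slice ranges that haven't been queried yet so caching doesn't
# skew the numbers.
t_models = timeit.timeit(lambda: list(Model.objects.all()[200:300]), number=1)
t_values = timeit.timeit(lambda: list(Model.objects.all()[400:500].values()), number=1)
print("model instances: %.3fs  values(): %.3fs" % (t_models, t_values))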

Is the only difference between the two queries the cost of allocating
model objects?
If object allocation is what is costing me so much CPU, how can I get
around it?
Is using the values() method the only option?
