On Feb 3, 6:19 pm, Malcolm Tredinnick <malc...@pointy-stick.com>
wrote:
> "Default" iteration over a queryset (using __iter__) also caches the
> instances in memory so that they can be reused without requiring extra
> database queries. Normally, the memory overhead is negligible, since the
> queryset is being used to put results on, say, a web page, when a
> billion rows isn't the right number to send back.
>
> If you want to iterate over the results without that caching behaviour,
> use the iterator() method on the QuerySet, rather than the __iter__()
> method. Thus:
>
>         for i in qs.iterator():
>            ...
>
> There will still be a lot of memory used here (proportional to the
> number of results returned, which in your case is "a lot"), since the
> database wrapper (psycopg or whatever) will pull all the data back from
> the database. That will still be a multiple less than also keeping
> copies of the Python objects, but it won't be zero. Using a pagination
> equivalent, as you're doing, could well be the best solution if you want
> to keep memory at an absolute minimum.
>

Thanks, Malcolm, for the detailed explanation.  Using iterator()
worked great!  While it still took more memory than my manual slicing
method (though still a manageable amount, in my case), it reduced the
number of queries to the database, which are time-consuming.
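For reference, the manual-slicing approach mentioned above can be sketched
roughly as below. This is a hypothetical helper, not code from this thread:
it works on any sliceable object (a Django queryset qualifies, since slicing
a queryset issues a LIMIT/OFFSET query), materializing only one chunk of
results at a time.

```python
def chunked(qs, chunk_size=1000):
    """Yield items from a sliceable object (e.g. a queryset) one
    fixed-size slice at a time, so only chunk_size items are held
    in memory at once instead of the full result set."""
    start = 0
    while True:
        # Slicing a queryset here would run a fresh LIMIT/OFFSET query.
        chunk = list(qs[start:start + chunk_size])
        if not chunk:
            break
        for item in chunk:
            yield item
        start += chunk_size

# Example with a plain list standing in for a queryset:
for row in chunked(list(range(10)), chunk_size=3):
    print(row)
```

Note the trade-off Malcolm describes: this keeps memory minimal but issues
one database query per chunk, whereas qs.iterator() uses a single query.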

Regards,
Casey
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To post to this group, send email to django-users@googlegroups.com
To unsubscribe from this group, send email to 
django-users+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---