James Bennett wrote:
> On 8/27/07, Jeremy Dunck <[EMAIL PROTECTED]> wrote:
>> QuerySet.iterator does what you want.
> 
> I was going to follow up with a documentation link, but it appears we
> lost the documentation for QuerySet.iterator at some point. Opened a
> ticket.
> 
> In any case, Jeremy's right: the "iterator" method returns a generator
> which fetches the data in chunks and only instantiates objects when
> they're actually needed, yielding them one at a time as you iterate
> over it. So you can replace a call to 'all()' with a call to
> 'iterator()', or chain 'iterator()' on after a call to 'filter()', and
> you should see greatly improved memory usage for situations where
> you're dealing with huge numbers of objects.
> 
> 
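For reference, the pattern Jeremy describes can be sketched in plain Python. This is not Django's actual implementation, just a minimal illustration of the chunked-fetch-and-yield idea; the `fetchmany` callable and chunk size are assumptions, modeled on DB-API 2.0 cursor semantics:

```python
def iter_in_chunks(fetchmany, chunk_size=100):
    """Yield rows one at a time, fetching them in fixed-size chunks.

    `fetchmany(n)` is assumed to return up to n rows, or an empty
    list when the result set is exhausted (DB-API 2.0 style).
    Only one chunk of rows is held in memory at a time.
    """
    while True:
        chunk = fetchmany(chunk_size)
        if not chunk:
            return
        for row in chunk:
            yield row

# Simulate a cursor over 1000 rows with a closure-style fake.
rows = list(range(1000))
def fake_fetchmany(n, _pos=[0]):
    start = _pos[0]
    _pos[0] = start + n
    return rows[start:start + n]

# Consuming only the first ten items fetches only the first chunk.
first_ten = [r for _, r in zip(range(10), iter_in_chunks(fake_fetchmany))]
```

Whether this actually bounds memory depends on the driver: as noted below, some backends materialize the whole result set client-side regardless.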
Thanks for the responses. I looked at the iterator() code and it does the 
right thing. After some experimenting I found that the MySQL backend is 
probably the worst case: even with iterator(), MySQL sends the whole 
result set to the client, and the Python DB driver only emulates cursor() 
semantics on top of it. As I said, I get a 2GB memory footprint; with 
SQLite I got only 20MB with the same code. So it looks like I have to 
think more about switching to another DBMS, or use explicit slicing.

For my purposes it might be more appropriate to change iterator() to do 
the slicing for me (via an explicit LIMIT clause), perhaps as a small 
patch for our application. I understand that changing it in general 
would be a bad design decision.
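The explicit-slicing workaround could look roughly like this. It is a hypothetical helper, not a proposed patch: on a real Django queryset, `qs[offset:offset + batch_size]` translates to a LIMIT/OFFSET query, but here a plain list stands in for the queryset so the logic can be shown self-contained:

```python
def sliced_iter(qs, batch_size=500):
    """Iterate over `qs` in fixed-size batches using slicing.

    With a Django queryset, each slice would issue a separate
    LIMIT/OFFSET query, so only `batch_size` objects are held in
    memory at once. Here `qs` can be anything sliceable.
    """
    offset = 0
    while True:
        batch = qs[offset:offset + batch_size]
        if not batch:
            break
        for obj in batch:
            yield obj
        offset += batch_size

# Usage with a plain list standing in for a queryset:
data = list(range(1234))
copied = list(sliced_iter(data, batch_size=100))
```

One caveat for the real-database case: OFFSET paging is only deterministic when the query has a stable ORDER BY, so the queryset would need explicit ordering for this to be correct.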

So again, thanks for the help.

-- 

                        Tomas Kopecek
                        e-mail: permonik at mesias.brnonet.cz
                         ICQ: 114483784

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To post to this group, send email to django-users@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---