Hi,

I have a table with millions of rows that I want to iterate through
without running out of memory, and without waiting a long time for all
rows to be loaded.

Looking at the documentation, it seems that .yield_per(count) does
what I want (I've read the warnings, and I'm not doing anything with
complicated objects). However, when I run the query:

for myobj in sess.query(MyObj).yield_per(10):
    print myobj.id

the process starts growing in memory as if I were doing

for myobj in sess.query(MyObj).all():
    print myobj.id

I also tried querying just the id column (which would be good enough
for my purposes),

for myid in sess.query(MyObj.id).yield_per(10):
    print myid

and I get the same result: nothing is printed, and the process uses
more and more memory, as if it were loading all rows into memory at
once.
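To be clear about the behavior I'm expecting, here's a self-contained sketch of the chunked fetching I assumed yield_per does under the hood. It uses only the stdlib sqlite3 module so it runs anywhere (the table and column names are made up for illustration; my real table is in MySQL):

```python
import sqlite3

# Toy stand-in for my millions-of-rows table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myobj (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO myobj (id) VALUES (?)",
                 [(i,) for i in range(1, 101)])

def iter_in_chunks(cursor, size):
    """Yield rows one at a time, fetching `size` rows per round trip,
    so at most `size` rows are held in memory at any moment."""
    while True:
        batch = cursor.fetchmany(size)
        if not batch:
            break
        for row in batch:
            yield row

cur = conn.execute("SELECT id FROM myobj ORDER BY id")
ids = [row[0] for row in iter_in_chunks(cur, 10)]
print(ids[:3], len(ids))
```

That is, I expected each iteration of the for loop to pull at most 10 rows at a time, rather than materializing the whole result set up front.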

Am I doing something wrong, or am I misunderstanding yield_per?
I'm running SQLA v0.5.0 with MySQL on Ubuntu.

Thanks,

Sam
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---