Ok, so SQLAlchemy has this nice feature where you can eager load 
relationships to significantly reduce the number of queries during 
processing.
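
By eager loading I mean something like the following (Parent and Child are 
just made-up example models to illustrate):

   from sqlalchemy.orm import joinedload

   # One query with a JOIN instead of one extra query per parent row.
   parents = session.query(Parent).options(joinedload(Parent.children)).all()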

On the other hand, you can use yield_per() (on Postgres) to significantly 
reduce memory usage by not loading the entire result set into memory at 
once.
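
That is, something along these lines (process() is just a stand-in for the 
actual work):

   # Fetch rows in batches of 1000 rather than materialising the whole result set.
   for obj in session.query(Parent).yield_per(1000):
       process(obj)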

For very good reasons mentioned in the documentation you can't use both of 
these in the same query, yet that is kind of my goal. What I'd like to 
achieve, for a given query that goes over a big table, is:

while not end of resultset:
   take 1000 results
   eagerload all the relationships
   process them
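
In code, roughly (chunked() is my own helper, not something from SQLAlchemy, 
and eagerload_for_set() refers to the function sketched further down):

   from itertools import islice

   def chunked(iterable, size):
       # Yield successive lists of at most `size` items from any iterator.
       it = iter(iterable)
       while True:
           chunk = list(islice(it, size))
           if not chunk:
               return
           yield chunk

   for batch in chunked(session.query(Parent).yield_per(1000), 1000):
       eagerload_for_set(batch, relationship)  # eager load for just this batch
       for obj in batch:
           process(obj)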

Now, the eager loading part is posing difficulties (or I'm not reading the 
documentation carefully enough). I found the attributes.set_committed_value() 
function 
(http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.attributes.set_committed_value), 
which solves half the problem, but I still need to generate the actual 
query to return the necessary objects. So perhaps (pseudo-code):

from sqlalchemy.orm import attributes

def eagerload_for_set(object_list, relationship):
    # relationship.left_id / .left / .right_table / .right_column are placeholders
    # for whatever metadata describes the FK column and its target.
    ids = set(getattr(o, relationship.left_id) for o in object_list)
    related = Query(relationship.right_table).filter(
        relationship.right_column.in_(ids)).all()
    # Key the related rows by the joined column so each object can find its match.
    lookup = {getattr(r, relationship.right_column.key): r for r in related}
    for o in object_list:
        attributes.set_committed_value(
            o, relationship.left, lookup.get(getattr(o, relationship.left_id)))
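
For one concrete relationship it would boil down to something like this 
(Parent.child_id -> Child.id and the 'child' relationship are made-up names 
just to show how set_committed_value() would be used):

   # Hypothetical models: Parent.child_id is a FK to Child.id, exposed as Parent.child.
   ids = set(p.child_id for p in batch)
   children = {c.id: c for c in session.query(Child).filter(Child.id.in_(ids))}
   for p in batch:
       # Mark the attribute as loaded so touching p.child won't emit a lazy load.
       attributes.set_committed_value(p, 'child', children.get(p.child_id))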

Before I go diving into SQLAlchemy to make the above actually work, does 
it seem reasonable? Are there any handy utils somewhere that might help?

Thanks for any ideas,

Have a nice day,

-- 
Martijn van Oosterhout


