Hello,

We're currently investigating an upgrade from SQLAlchemy 0.8 to 1.2. Right 
off the bat, we've encountered the well-documented change whereby yield_per 
now throws an exception for queries that use joined collection eager 
loading. We have an intermediary piece of software that handles all 
incoming ORM Queries and runs them through an iterator that is currently 
set up to use yield_per(500). We have quite a few places where 
joinedload() is used, and a couple of places where a row-based join is 
used (which SQLAlchemy deduplicates, of course). I'm eager to know the 
best practices for approaching this in our upgrade.
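To make the failure concrete, here is a stripped-down sketch of the shape of our setup (Parent/Child are illustrative names, not our real schema, and the import shim is only there so the sketch runs across SQLAlchemy versions):

```python
import sqlalchemy as sa
from sqlalchemy.orm import joinedload, relationship, sessionmaker

try:  # 1.4+
    from sqlalchemy.orm import declarative_base
except ImportError:  # earlier releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    children = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey("parent.id"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# joined collection eager load + yield_per: iterated quietly under 0.8,
# raises InvalidRequestError when executed under 1.0+
query = session.query(Parent).options(joinedload(Parent.children))
try:
    for parent in query.yield_per(500):
        pass
    raised = False
except sa.exc.InvalidRequestError:
    raised = True
```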

1) From previous research into the documentation, discussion forums, and SO 
threads, it appears yield_per was never compatible with these types of 
Queries, and the change to throw an exception was made in 1.0. What was 
actually happening in 0.8 when an ORM Query with eager loads was iterated 
using yield_per?

2) Due to obvious memory/performance concerns, we'd like to keep the 
intended behavior of yield_per working behind the scenes. Depending on the 
answer to #1, I'm aware it may not have been doing what we expected - it's 
quite possible it was simply setting 
execution_options(stream_results=True). Setting that option to True (which 
enables a server-side cursor) is likely what we need to keep in place... 
so would that be an option in place of yield_per?
  a) If so, setting that option seems to cause a FETCH FORWARD ALL instead 
of the limited fetches we were typically seeing with yield_per. How can we 
limit the row count fetched per FETCH FORWARD?
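For reference, here is a stripped-down version of what we've been trying for 2a. The sqlite URL is just a stand-in so the snippet runs anywhere; our real target is PostgreSQL via psycopg2, where stream_results means a named server-side cursor. If I'm reading the docs right, max_row_buffer is the knob that caps how many rows get buffered per fetch:

```python
import sqlalchemy as sa

# sqlite stand-in so the snippet is runnable as-is; the behaviour in
# question (named cursor, batched FETCH FORWARD) is psycopg2-specific.
engine = sa.create_engine("sqlite://")

with engine.connect() as conn:
    result = conn.execution_options(
        stream_results=True,  # psycopg2: use a server-side (named) cursor
        max_row_buffer=500,   # cap buffered rows per fetch (per our reading of the docs)
    ).execute(sa.text("SELECT 1 AS x"))
    rows = result.fetchall()
# rows == [(1,)]
```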

3) Would a better replacement/workflow for this involve manually paging 
with Query.slice(start, stop)? It appears that .slice() doesn't actually 
use a server-side cursor, which is fine as long as we keep the count of 
objects returned per batch low, but having to manually manage the "slices" 
of results isn't ideal.
  a) When using slice and the SQLAlchemy wizardry of deduplication takes 
place, I believe the LIMIT/OFFSET applies to raw rows rather than to the 
deduplicated entities, which could be detrimental to the result set as a 
whole - is that correct?
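For what it's worth, the manual workflow I have in mind for #3 is the loop below - a hypothetical helper written against anything exposing a .slice(start, stop), so the tiny stand-in class lets it run here without a database. With a real ORM Query, each window would become a fresh LIMIT/OFFSET SELECT:

```python
def iterate_in_slices(query, batch_size=500):
    """Yield objects window by window via query.slice(start, stop)."""
    start = 0
    while True:
        batch = list(query.slice(start, start + batch_size))
        for obj in batch:
            yield obj
        if len(batch) < batch_size:
            break  # short (or empty) window: we've reached the end
        start += batch_size

class FakeQuery(object):
    """Stand-in with Query.slice() semantics (LIMIT/OFFSET as a Python slice)."""
    def __init__(self, items):
        self._items = list(items)

    def slice(self, start, stop):
        return self._items[start:stop]

results = list(iterate_in_slices(FakeQuery(range(1234)), batch_size=100))
# results == [0, 1, ..., 1233]
```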

Thank you for your time!

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/
