Currently, when SQLAlchemy performs a polymorphic lookup, it queries
against all of the detail tables and returns the correct Python object
for the polymorphic identity. Essentially you get a subselect for each
detail table included in the primary join, even though only one of the
detail tables contains the data specific to that object's identity. Is
there currently a way, or a plan to support, splitting the polymorphic
query into two queries? The first would fetch the row from the base
table; the second would retrieve the details from the single detail
table identified by the discriminator. That way only two tables would
be queried instead of n, where n is the number of polymorphic identities.

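To make the idea concrete, here is a rough Core-level sketch of the query
pattern I am hoping the ORM could emit. The employees/engineers/managers
tables, the 'type' discriminator column, and the load_two_step() helper are
just hypothetical names for illustration, not anything SQLAlchemy provides:

    from sqlalchemy import (MetaData, Table, Column, Integer, String,
                            ForeignKey, create_engine)

    metadata = MetaData()

    # hypothetical schema: base table carries the discriminator column
    employees = Table('employees', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50)),
        Column('type', String(20)))

    # one detail table per polymorphic identity
    engineers = Table('engineers', metadata,
        Column('id', Integer, ForeignKey('employees.id'), primary_key=True),
        Column('language', String(50)))

    managers = Table('managers', metadata,
        Column('id', Integer, ForeignKey('employees.id'), primary_key=True),
        Column('department', String(50)))

    detail_tables = {'engineer': engineers, 'manager': managers}

    def load_two_step(conn, ident):
        # query 1: hit only the base table; the discriminator tells us
        # which identity this row is
        base = conn.execute(
            employees.select(employees.c.id == ident)).fetchone()
        if base is None:
            return None, None
        # query 2: hit only the one detail table named by the discriminator
        detail = detail_tables[base['type']]
        row = conn.execute(detail.select(detail.c.id == ident)).fetchone()
        return base, row

    engine = create_engine('sqlite://')
    metadata.create_all(engine)
    # e.g.  base, detail = load_two_step(engine.connect(), 1)

The point is that no matter how many identities exist, a single lookup only
ever touches the base table plus one detail table.
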
Our DBAs are concerned that as our tables grow, possibly to 2.5 million
rows, unioning against multiple tables will perform poorly, even though
the union is against a primary key. I know I could write a custom mapper
to resolve this, but I thought I would bring it up since it may affect
other users, and there may already be an easy solution I am not aware of.

