On Mar 27, 8:07 am, [EMAIL PROTECTED] wrote:
> On Tuesday 27 March 2007 17:18:04 Michael Bayer wrote:
> > On Mar 27, 2007, at 3:53 AM, [EMAIL PROTECTED] wrote:
> > > Let's say you have 5 leaf types in 4-5 levels of the hierarchy
> > > tree; say that makes 10 tables total. Say E is a leaf object and
> > > comes from A.join(B).join(C).join(E) - so E is split amongst all
> > > the A, B, C, E tables. Which is the detail table?
>
> > you have to understand... nobody inherits more than one level deep
> > in almost any inheritance situation.  that's just you :)
>
> yeaa. seems so.
>
> > he means:
>
> > select * from base_table where id=7
> > <fetch row> -> type of object is "A"
>
> > select * from joined_table_A where id = 7
> > <fetch row, assemble into A instance>
>
> > -> done
>
> i got it this far; but this is applicable only for a single lazy
> relation. What if i want all of them for which name.startswith("abc"),
> and some are A, some are B, some are XYZ? Then instead of 1 big huge
> polymorphic query, i have to issue n queries (where n is the number
> of leaf types), i.e. all those selectables that go into the
> polymorphic union's dictionary. i'm sure this could be faster in some
> cases (simple means fast), but then one could just use the
> non-polymorphic selectables directly and keep the polymorphism
> "switching" on the python side - a (pseudo-)polymorphic mapper that
> does polymorphism in python only but issues several direct
> selectables to SQL.
> Something of the sort?
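If I follow, the python-side switching you describe would look
roughly like this. Only a sketch: A, B and XYZ stand in for the real
mapped leaf classes, and the exact Query spelling differs between
sqlalchemy versions.

# dispatch one plain, non-polymorphic query per leaf type and merge
# the results in python; each iteration issues one simple join against
# that leaf's tables instead of one big polymorphic union
LEAF_CLASSES = [A, B, XYZ]

def fetch_by_prefix(session, prefix):
    results = []
    for cls in LEAF_CLASSES:
        results.extend(
            session.query(cls).filter(cls.name.like(prefix + "%")).all()
        )
    return results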

In your case, where you are looking for name.startswith("abc"), my
limited understanding of the SQL query architecture is that, as you
said, you would end up issuing one huge query joining against the n
leaf tables. As you also mentioned, doing the join per item retrieved
from the trunk table would instead cost you n separate join queries,
where n is the number of items retrieved. Essentially it comes down
to how the queries break down in the database's retrieval plan. If
the database has indices set up to handle the queries that sqlalchemy
generates, then you will probably get good performance; if however
each query causes a full table scan, then performance will start to
drop off as the table size increases.
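
For what it's worth, a plain btree index on the name column is
usually enough for a prefix search like name LIKE 'abc%' to avoid the
full scan (depending on the database and collation). A sketch, with a
made-up trunk table layout:

from sqlalchemy import MetaData, Table, Column, Integer, String, Index

metadata = MetaData()

# hypothetical trunk table, reduced to the columns in the example
base_table = Table(
    "base_table", metadata,
    Column("id", Integer, primary_key=True),
    Column("type", String(30)),    # polymorphic discriminator
    Column("name", String(100)),
)

# a btree index lets the database answer name LIKE 'abc%' with a
# range scan over the index rather than a full table scan
Index("ix_base_table_name", base_table.c.name)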

I think I agree with Michael's thought that the better a database
performs, the less object-like it becomes. Most of my database has
been normalized for performance reasons, and because of this it does
not translate well to objects.


In the end we will probably experiment with both approaches: a custom
mapper that handles the joins itself (issuing two separate queries
per polymorphic object) and the sqlalchemy polymorphic join, and see
which fares better against a loaded database.
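
Probably with a crude harness along these lines, where
two_query_fetch and polymorphic_join_fetch are hypothetical stand-ins
for the two strategies:

import time

def time_strategy(run_query, runs=50):
    # average wall-clock seconds per run of one query strategy;
    # run against a realistically sized, warmed-up database
    start = time.time()
    for _ in range(runs):
        run_query()
    return (time.time() - start) / runs

# print(time_strategy(two_query_fetch))
# print(time_strategy(polymorphic_join_fetch))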

Thank you guys for all your input.

