[sqlalchemy] Dabo ReportDesigner/Writer

2013-06-04 Thread Werner

Hi,

I am looking at Dabo's ReportDesigner/Writer to do reporting (I looked at 
others like PythonReports and Geraldo, but both have problems with large 
text blobs).


It wants the data as:

Dabo's reportwriter wants dataset-like structures, which is a sequence 
(list, tuple) of mappings (dict), where each key in the mapping is a 
field name, and each item in the outer sequence is a record.
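
The target shape can be sketched in plain Python (the field names below are hypothetical stand-ins echoing the cellarbook example, not Dabo's API):

```python
# A stand-in for a DB result: plain tuples plus the list of column names.
keys = ('name', 'avgpurchaseprice')
rows = [('Merlot', 12.5), ('Chianti', 9.0)]

# The "dataset-like" structure Dabo wants: a sequence of mappings,
# one dict per record, keyed by field name.
dataset = [dict(zip(keys, row)) for row in rows]
assert dataset == [{'name': 'Merlot', 'avgpurchaseprice': 12.5},
                   {'name': 'Chianti', 'avgpurchaseprice': 9.0}]
```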


I managed to figure this out:

result = session.query(db.Cellarbook.avgpurchaseprice,
                       db.Drinkinfo.name)
result = result.join(db.Drinkinfo)

print result

for row in result.all():
    print row.__dict__['name']


Is using row.__dict__ the correct way or are there other ways?

Best regards
Werner

--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] EXTRACT() not working as expected in PostgreSQL with TIMESTAMP WITH TIMEZONE

2013-06-04 Thread Martijn van Oosterhout
I have just upgraded sqlalchemy and am running some tests on some old code 
and getting some very strange results.

I have a table with a column defined as TIMESTAMP WITH TIMEZONE:

test_table = Table('test', metadata,
Column('id', Integer,  primary_key=True),
Column('data', UnicodeText, primary_key=True),
Column('ts', DateTime(timezone=True), nullable=False),
)

And I then have a query which uses it like so (TestTable is the ORM mapping 
of test_table):

session.query(extract('epoch', database.TestTable.ts))

and this is producing output like:

SELECT EXTRACT(epoch FROM test_table.ts :: timestamp);

That cast is a problem, since casting a timestamptz to a timestamp removes 
the timezone info, which means you get a different answer, since it's now 
interpreted as UTC, which means the answer is (in my case) two hours off:

# select extract(epoch from '2013-06-02 11:23:45+02'::timestamptz);
 date_part  
------------
 1370165025
(1 row)

# select extract(epoch from '2013-06-02 11:23:45+02'::timestamptz::timestamp);
 date_part  
------------
 1370172225
(1 row)
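
The two-hour difference can be reproduced with stdlib datetimes (a sketch of the same arithmetic in plain Python, not SQLAlchemy code):

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=2))          # UTC+02, as in the example
aware = datetime(2013, 6, 2, 11, 23, 45, tzinfo=tz)
naive = aware.replace(tzinfo=None)         # what the ::timestamp cast does

epoch_aware = aware.timestamp()            # offset-aware: 11:23:45+02 is 09:23:45 UTC
# extract(epoch) on a naive timestamp treats the wall time as UTC:
epoch_naive = naive.replace(tzinfo=timezone.utc).timestamp()

assert epoch_aware == 1370165025.0
assert epoch_naive == 1370172225.0
assert epoch_naive - epoch_aware == 7200   # two hours off
```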

This cast seems to have been introduced in response to 
http://www.sqlalchemy.org/trac/ticket/1647, which seems to completely ignore 
the WITH TIMEZONE case. I tested it on 0.5.4, but on 0.7.6 it's definitely 
broken. What I don't understand is why the cast is there at all. Unless the 
argument is text, postgresql can work out the type by itself just fine.

I think it's a bug and if people agree I can file one. But is there a 
workaround to suppress the cast?

Thanks in advance,
-- 
Martijn van Oosterhout klep...@gmail.com





[sqlalchemy] Re: EXTRACT() not working as expected in PostgreSQL with TIMESTAMP WITH TIMEZONE

2013-06-04 Thread Martijn van Oosterhout
For the record and for other people running into the same problem, here's a 
workaround that kills the cast by wrapping the column in a function that 
does nothing:

session.query(extract('epoch', func.timestamptz(database.TestTable.ts)))

This of course will only work until sqlalchemy learns about all the builtin 
casts, but for now it's fine.

Have a nice day,

On Tuesday, June 4, 2013 10:53:22 AM UTC+2, Martijn van Oosterhout wrote:
snip 

 I think it's a bug and if people agree I can file one. But is there a 
 workaround to suppress the cast?

 Thanks in advance,
 -- 
 Martijn van Oosterhout klep...@gmail.com







Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 8:18 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.
 
 I have a hard time understanding the following comment for Query.yield_per():

   Yield only ``count`` rows at a time.
 
WARNING: use this method with caution; if the same instance is present
in more than one batch of rows, end-user changes to attributes will be
overwritten.
 
In particular, it's usually impossible to use this setting with
eagerly loaded collections (i.e. any lazy='joined' or 'subquery')
since those collections will be cleared for a new load when
encountered in a subsequent result batch.   In the case of 'subquery'
loading, the full result for all rows is fetched which generally
defeats the purpose of :meth:`~sqlalchemy.orm.query.Query.yield_per`.
 
Also note that many DBAPIs do not stream results, pre-buffering
all rows before making them available, including mysql-python and
psycopg2.  :meth:`~sqlalchemy.orm.query.Query.yield_per` will also
set the ``stream_results`` execution
option to ``True``, which currently is only understood by psycopg2
and causes server side cursors to be used.
 
 Suppose I have a code like this:
 
q = session.query(cls).filter(...)
q = q.options(
    joinedload(cls.foo),
    subqueryload(cls.bars),
)
for each in q.yield_per(50):
    pass  # access each.foo or each.bars
 
 Does it work? If so, why? If not, why not?

assuming cls.foo is a many-to-one, it will produce the correct result, but will 
be far worse in terms of memory and performance, as the subqueryload() call 
will be invoked for each distinct batch of 50 rows, across the *full* result 
set.  So if your result has 1000 rows, and the number of bars total is 1, 
you will load 1 additional rows for *each* yield per of 50.
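
The arithmetic behind that warning can be sketched in plain Python (all figures below are hypothetical):

```python
# Hypothetical figures: a 1000-row result consumed in yield_per batches of 50,
# with a subqueryload() whose full result is re-fetched for every batch.
total_rows = 1000
batch_size = 50
bars_total = 10000                          # hypothetical full "bars" result size

batches = -(-total_rows // batch_size)      # ceiling division: 20 batches
extra_rows_loaded = batches * bars_total    # the subquery result, fetched per batch

assert batches == 20
assert extra_rows_loaded == 200000          # vs. 10000 rows with one subqueryload
```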





Re: [sqlalchemy] nested inheritance / polymorphic relationships

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 1:55 AM, Amir Elaguizy aelag...@gmail.com wrote:

 Hi there,
 
 I have a tree that looks like this, reflected via polymorphic inheritance:

what do we mean by "reflected" here? Are you reflecting tables from the 
database, that is, 
http://docs.sqlalchemy.org/en/rel_0_8/core/schema.html#metadata-reflection ?

 
 That works great, like:
 
 
 class BaseModel(db.Model):  # Table A in diagram
     __tablename__ = "entities"

     id = db.Column(db.BigInteger, primary_key=True, nullable=False,
                    server_default=func.nextval('guid_seq'))
     type_id = db.Column(db.SmallInteger, db.ForeignKey(EntityTypesModel.id))

     __mapper_args__ = {
         'polymorphic_identity': 'entity',
         'polymorphic_on': type_id,
         'with_polymorphic': '*'
     }

this is why I question the word "reflected", because I don't see you using 
reflection there.

 
 class BrandModel(BaseModel):  # Table B, C, D in diagram
     __tablename__ = 'brands'

     id = db.Column(db.BigInteger, db.ForeignKey(StufffModel.id),
                    primary_key=True, nullable=False)
     name = db.Column(db.String, nullable=False)

     __mapper_args__ = {
         'polymorphic_identity': ET_BRAND,
     }

I'm confused by this as well - are you saying that you map the same class to B, 
C, and D rows?   That would be unusual.  It wouldn't work at all on the 
persistence side, as SQLAlchemy could not know which of B, C, or D you wish for 
a particular BrandModel to be persisted towards.


 
 
 The problem is I need to reflect something more like this:
 
        A
      / | \
     B  C  D
           / \
          E   F
 
 Where D is not only a polymorphic child of A but also the polymorphic parent 
 of E & F.
 
 It seems like I have to choose, D can either be a polymorphic child or it can 
 be a parent - it can't be both.
 
 Do I have any options here?

SQLAlchemy can represent inheritance hierarchies of any depth.  However, 
because you are assigning a single subclass to all of B, C, and D, that might 
be why there's an issue here: you'd need to assign a distinct subclass of 
BaseModel to at least D, and then another subclass of D's class to handle E and 
F.

Preferably, you'd produce distinct classes for all six tables.







[sqlalchemy] Changing a value linked with a one-to-one relationship

2013-06-04 Thread Etienne Rouxel
Hello

I would like to change a value in a one-to-one relationship but I cannot, 
because of some actions that SQLAlchemy tries to do, and I don't know why.
Here is my simplified code:

# -*- coding: utf-8 -*-
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

_taxon_table = Table('taxon', Base.metadata,
Column('id', Integer, primary_key=True),
schema='botany'
)

_foliagetype_table = Table('foliagetype', Base.metadata,
Column('id', Integer, primary_key=True),
schema='botany'
)

_plant_table = Table('plant', Base.metadata,
Column('id', Integer, primary_key=True),
Column('taxon_id', Integer),
Column('foliagetype_id', Integer),
ForeignKeyConstraint(['taxon_id'], ['botany.taxon.id']),
ForeignKeyConstraint(['foliagetype_id'], ['botany.foliagetype.id']),
schema='botany'
)

class Taxon(Base):
    __table__ = _taxon_table

class Foliagetype(Base):
    __table__ = _foliagetype_table

class Plant(Base):
    __table__ = _plant_table

    taxon = relationship('Taxon',
                         backref=backref('plant',
                                         uselist=False))

    foliagetype = relationship('Foliagetype',
                               backref=backref('plants'))

if __name__ == '__main__':

    engine = create_engine('postgresql://xxx@localhost:5432/xxx')
    Session = sessionmaker(bind=engine)
    session = Session()

    taxon_old = session.query(Taxon).get(-2147483634)
    taxon_new = session.query(Taxon).get(-2147483645)

    foliagetype_old = session.query(Foliagetype).get(-2147483646)
    foliagetype_new = session.query(Foliagetype).get(-2147483645)

    plant = session.query(Plant).get(-2147483643)

    print "-- Change foliagetype --"
    print plant.foliagetype is foliagetype_old  # True
    plant.foliagetype = foliagetype_new
    print plant.foliagetype is foliagetype_new  # True

    print "-- Change taxon --"
    print plant.taxon is taxon_old  # True
    plant.taxon = taxon_new  # triggers an IntegrityError
    print plant.taxon is taxon_new



So a plant must have exactly one foliagetype, and the same with the taxon: a 
plant must have exactly one taxon. The difference, however, is that a 
foliagetype can be linked to several plants while a taxon can only be linked 
to a single plant.
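
The bookkeeping that a one-to-one backref implies can be sketched without SQLAlchemy at all; the classes below are a hypothetical standalone sketch that only mirrors the names from the example:

```python
# Moving a Plant to a new Taxon also has to detach it from the old Taxon's
# side, because each Taxon may hold at most one Plant (uselist=False).
class Taxon(object):
    def __init__(self):
        self.plant = None

class Plant(object):
    def __init__(self):
        self._taxon = None

    @property
    def taxon(self):
        return self._taxon

    @taxon.setter
    def taxon(self, new_taxon):
        old = self._taxon
        if old is not None:
            old.plant = None        # detach the old side first
        self._taxon = new_taxon
        if new_taxon is not None:
            new_taxon.plant = self  # attach the new side

t_old, t_new, p = Taxon(), Taxon(), Plant()
p.taxon = t_old
p.taxon = t_new
assert t_old.plant is None and t_new.plant is p
```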

In my code above, the change of foliagetype leads SQLAlchemy to output the 
expected SQL:

UPDATE botany.plant SET foliagetype_id= -2147483645 WHERE botany.plant.id = 
 -2147483643


However, concerning the change of taxon, I don't understand what SQLAlchemy 
is doing. It first outputs:

SELECT botany.plant.id AS botany_plant_id, botany.plant.taxon_id AS 
botany_plant_taxon_id, botany.plant.foliagetype_id AS 
botany_plant_foliagetype_id 
FROM botany.plant 
WHERE  -2147483634 = botany.plant.taxon_id


and then:

UPDATE botany.plant SET taxon_id=NULL WHERE botany.plant.id =  -2147483643


which fails because NULL is not allowed for taxon_id. So why does SQLAlchemy 
not just output this instead?

UPDATE botany.plant SET taxon_id= -2147483645 WHERE botany.plant.id = 
 -2147483643






Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
Hello.

 assuming cls.foo is a many-to-one, it will produce the correct result,
 but will be far worse in terms of memory and performance, as the
 subqueryload() call will be invoked for each distinct batch of 50 rows,
 across the *full* result set.  So if your result has 1000 rows, and the
 number of bars total is 1, you will load 1 additional rows for
 *each* yield per of 50.

Ok, I think I get it. Is there a way to make it all work without the performance
penalty of subqueryload()? For example, what will happen if I replace it with
joinedload(cls.bars)?

Ladislav Lenart


On 4.6.2013 16:19, Michael Bayer wrote:
 
 On Jun 4, 2013, at 8:18 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 I have a hard time to understand the following comment for Query.yield_per():
 
  Yield only ``count`` rows at a time.

WARNING: use this method with caution; if the same instance is present
in more than one batch of rows, end-user changes to attributes will be
overwritten.

In particular, it's usually impossible to use this setting with
eagerly loaded collections (i.e. any lazy='joined' or 'subquery')
since those collections will be cleared for a new load when
encountered in a subsequent result batch.   In the case of 'subquery'
loading, the full result for all rows is fetched which generally
defeats the purpose of :meth:`~sqlalchemy.orm.query.Query.yield_per`.

Also note that many DBAPIs do not stream results, pre-buffering
all rows before making them available, including mysql-python and
psycopg2.  :meth:`~sqlalchemy.orm.query.Query.yield_per` will also
set the ``stream_results`` execution
option to ``True``, which currently is only understood by psycopg2
and causes server side cursors to be used.

 Suppose I have a code like this:

q = session.query(cls).filter(...)
q = q.options(
joinedload(cls.foo),
  subqueryload(cls.bars),
)
for each in q.yield_per(50):
# access each.foo or each.bars

 Does it work? Is so, why? If not, why?
 
 assuming cls.foo is a many-to-one, it will produce the correct result, but 
 will be far worse in terms of memory and performance, as the subqueryload() 
 call will be invoked for each distinct batch of 50 rows, across the *full* 
 result set.  So if your result has 1000 rows, and the number of bars total 
 is 1, you will load 1 additional rows for *each* yield per of 50.
 






Re: [sqlalchemy] nested inheritance / polymorphic relationships

2013-06-04 Thread Amir Elaguizy
I didn't intend to use the word "reflected" in the programming sense; I meant
it in the traditional sense: "is represented by".

That sentence was confusing, sorry!

I was saying classes B, C, and D are all defined using that same pattern.
They each have their own class.
They each have their own class.

Amir

On Tuesday, June 4, 2013, Michael Bayer wrote:


 On Jun 4, 2013, at 1:55 AM, Amir Elaguizy aelag...@gmail.com
 wrote:

  Hi there,
 
  I have a tree that looks like this, reflected via polymorphic
 inheritance:

 what do we mean reflected here, are you reflecting tables from the
 database, that is,
 http://docs.sqlalchemy.org/en/rel_0_8/core/schema.html#metadata-reflection?

 
  That works great, like:
 
 
  class BaseModel(db.Model): # Table A in diagram
  __tablename__ = entities
 
  id = db.Column(db.BigInteger, primary_key=True, nullable=False,
 server_default=func.nextval('guid_seq'))
  type_id = db.Column(db.SmallInteger,
 db.ForeignKey(EntityTypesModel.id))
 
  __mapper_args__ = {
  'polymorphic_identity':'entity',
  'polymorphic_on':type_id,
  'with_polymorphic':'*'
  }

 this is why I question the word reflected because I don't see you using
 reflection there.

 
  class BrandModel(BaseModel):   # Table B, C, D in diagram
  __tablename__ = 'brands'
 
  id = db.Column(db.BigInteger, db.ForeignKey(StufffModel.id),
 primary_key=True, nullable=False)
  name = db.Column(db.String, nullable=False)
 
  __mapper_args__ = {
  'polymorphic_identity':ET_BRAND,
  }

 Im confused by this as well - are you saying that you map the same class
 to B, C, and D rows?   That would be unusual.  It wouldn't work at all on
 the persistence side as SQLAlchemy could not know which of B, C, or D you
 wish for a particular BrandModel to be persisted towards.


 
 
  The problem is I need to reflect something more like this:
 
         A
       / | \
      B  C  D
            / \
           E   F
 
  Where D is not only a polymorphic child of A but also the polymorphic
 parent of E & F.
 
  It seems like I have to choose, D can either be a polymorphic child or
 it can be a parent - it can't be both.
 
  Do I have any options here?

 SQLAlchemy can represent inheritance hierarchies of any depth.However,
 because you are assigning a single subclass to all of B, C, and D that
 might be why there's an issue here, you'd need to assign a distinct
 subclass of BaseModel to at least D, and then another subclass of D_Model
 to handle E and F.

 Preferably, you'd produce distinct classes for all six tables.











Re: [sqlalchemy] EXTRACT() not working as expected in PostgreSQL with TIMESTAMP WITH TIMEZONE

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 4:53 AM, Martijn van Oosterhout klep...@gmail.com wrote:

 I have just upgraded sqlalchemy and am running some tests on some old code 
 and getting some very strange results.
 
 I have a table with a column defined as TIMESTAMP WITH TIMEZONE:
 
 test_table = Table('test', metadata,
 Column('id', Integer,  primary_key=True),
 Column('data', UnicodeText, primary_key=True),
 Column('ts', DateTime(timezone=True), nullable=False),
 )
 
 And I then have a query which uses it like so (TestTable is the ORM mapping 
 of test_table):
 
 session.query(extract('epoch', database.TestTable.ts))
 
 and this is producing output like:
 
 SELECT EXTRACT(epoch FROM test_table.ts :: timestamp);
 
 That cast is a problem, since casting a timestamptz to a timestamp removes 
 the timezone info, which means you get a different answer, since it's now 
 interpreted as UTC, which means the answer is (in my case) two hours off:
 
 # select extract(epoch from '2013-06-02 11:23:45+02'::timestamptz);
  date_part  
 ------------
  1370165025
 (1 row)
 
 # select extract(epoch from '2013-06-02 11:23:45+02'::timestamptz::timestamp);
  date_part  
 ------------
  1370172225
 (1 row)
 
 This cast seems to have been introduced in response to 
 http://www.sqlalchemy.org/trac/ticket/1647 which seems to completely ignore 
 the WITH TIMEZONE case. I tested it on 0.5.4, but on 0.7.6 it's definitely 
 broken. What I don't understand is why the cast is there at all. Unless the 
 argument is text postgresql can work out the type by itself just fine.

Note that postgresql is not the only actor needed here, the DBAPI needs to 
interpret a literal Python interval type into an appropriate string format, 
which modern versions of psycopg2 can do just fine.

The cast here is much older than that, that ticket only refers to some 
adjustments to not render the cast in some cases.  It was there when the 
EXTRACT feature was first added, as you can see here:

http://www.sqlalchemy.org/trac/changeset/aca84bebb091a51ceeb#file4

So unfortunately I don't have a clear picture of the rationale for this cast, 
the best guess would be that old versions of psycopg2 around 2009 were unaware 
of the Python interval type.   I just tested with 2010's 2.2.2 and that one 
seems to know about intervals.

This certainly should be removed (http://www.sqlalchemy.org/trac/ticket/2740) 
but I believe it has to be in 0.9 only as 0.8 is in maintenance mode.








Re: [sqlalchemy] Changing a value linked with a one-to-one relationship

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 10:38 AM, Etienne Rouxel rouxel.etie...@gmail.com wrote:

 
 and then:
 
 UPDATE botany.plant SET taxon_id=NULL WHERE botany.plant.id =  -2147483643
 
 which fails because NULL is not allowed for taxon_id. So, why do not 
 SQLAlchemy just output this instead?
 
 UPDATE botany.plant SET taxon_id= -2147483645 WHERE botany.plant.id =  
 -2147483643

if you look in your stack trace, you'll see the word "autoflush" (I adapted 
your excerpt into a full test case to reproduce):

Traceback (most recent call last):
  File "test.py", line 74, in <module>
    plant.taxon = taxon_new  # triggers an IntegrityError
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
303, in __set__

...

  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
824, in fire_replace_event
    value = fn(state, value, previous, initiator or self)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
1131, in emit_backref_from_scalar_set_event
    passive=PASSIVE_NO_FETCH)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 
638, in append

... 

  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 
585, in _emit_lazyload
    result = q.all()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2219, 
in all
    return list(self)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2330, 
in __iter__
    self.session._autoflush()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 
1139, in _autoflush
    self.flush()


you can see that the ORM is not getting as far as taxon_new being fully 
assigned.   The assignment is resulting in the backref firing off and leading 
to an assignment on the Taxon.plant side.The assignment then tries to load 
the existing value for Taxon.plant, which results in an autoflush, which 
results in the integrity error.

There seems to be some unfortunate effect going on, as the issue of backrefs 
loading collections like this was long ago resolved to defer that activity, but 
it appears that when uselist=False is present, this logic no longer takes 
place.  Ticket http://www.sqlalchemy.org/trac/ticket/2741 has been added to 
look into this as this should not be happening.

Anytime a stack trace has autoflush in it where you don't want autoflush to 
occur, defer autoflush for those operations like this:

with session.no_autoflush:
    plant.taxon = taxon_new

session.flush()  # optional; emits the UPDATE you're looking for sooner rather 
                 # than later
print plant.taxon is taxon_new








Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 10:45 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.
 
 assuming cls.foo is a many-to-one, it will produce the correct result,
 but will be far worse in terms of memory and performance, as the
 subqueryload() call will be invoked for each distinct batch of 50 rows,
 across the *full* result set.  So if your result has 1000 rows, and the
 number of bars total is 1, you will load 1 additional rows for
 *each* yield per of 50.
 
 Ok, I think I get it. Is there a way to make it all work without the 
 performance
 penalty of subqueryload? For example, what will happen if I replace it with
 joinedload(cls.bars)?

You will then get the wrong results.   The docstring tries to explain this - a 
joinedload uses a JOIN.  For each cls instance, there are many rows, one for 
each bar.  If you cut off the results in the middle of populating that 
collection, the collection is incomplete, you'll see the wrong collection on 
your cls.bars.  On the next load, cls.bars will be wiped out and populated with 
the remaining bar objects.

Don't use yield_per.  Use windowing instead, see 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery.
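
Why a truncated joined result corrupts collections can be shown with plain tuples (a sketch, no SQLAlchemy involved):

```python
from itertools import groupby

# Joined rows as (cls_id, bar_id) pairs: one row per (cls, bar) combination,
# so a cls instance with three bars occupies three consecutive rows.
rows = [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'd')]

# Grouping the full result gives the correct collection for cls 1:
full = {k: [b for _, b in g] for k, g in groupby(rows, key=lambda r: r[0])}
assert full[1] == ['a', 'b', 'c']

# Cutting the stream after two rows (one "batch") splits that collection:
batch = rows[:2]
partial = {k: [b for _, b in g] for k, g in groupby(batch, key=lambda r: r[0])}
assert partial[1] == ['a', 'b']   # incomplete collection handed to the caller
```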








Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
Hello.

 You will then get the wrong results.   The docstring tries to explain this -
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one
 for each bar.  If you cut off the results in the middle of populating that
 collection, the collection is incomplete, you'll see the wrong collection on
 your cls.bars.  On the next load, cls.bars will be wiped out and populated
 with the remaining bar objects.

Ok, I think I understand this too.

I've tried WindowedRangeQuery. It looked promising at first, but it is (much)
slower than yield_per() with all its quirks, at least for my use case. If I
understand the WindowedRangeQuery recipe, it does a full scan of the target
table first to read all the ids and calculate the bounds of all the windows. I
don't want to do it like this. I am working with relatively large datasets but it
is still far less than all rows in the table. Something like 10-5 rows from
a table with 1-2 million rows. The windowed query iterates over many completely
empty windows.

Can I modify the recipe so it preserves the filtering and creates windows only
for the interesting subset of the table?
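
One way to avoid empty windows is to compute the window bounds from only the ids that actually match the filter; a sketch in plain Python (assuming the matching ids are available as a sequence):

```python
def window_ranges(ids, size):
    """Yield (start_id, end_id) pairs covering only the ids present."""
    ids = sorted(ids)
    for i in range(0, len(ids), size):
        chunk = ids[i:i + size]
        yield chunk[0], chunk[-1]

# Sparse matching ids: every window contains rows; no empty windows.
ids = [3, 7, 8, 120, 121, 500, 501, 502]
assert list(window_ranges(ids, 3)) == [(3, 8), (120, 500), (501, 502)]
```

Each (start, end) pair then becomes a BETWEEN filter applied to the original query.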

Thank you,

Ladislav Lenart


On 4.6.2013 17:18, Michael Bayer wrote:
 
 On Jun 4, 2013, at 10:45 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 assuming cls.foo is a many-to-one, it will produce the correct result,
 but will be far worse in terms of memory and performance, as the
 subqueryload() call will be invoked for each distinct batch of 50 rows,
 across the *full* result set.  So if your result has 1000 rows, and the
 number of bars total is 1, you will load 1 additional rows for
 *each* yield per of 50.

 Ok, I think I get it. Is there a way to make it all work without the 
 performance
 penalty of subqueryload? For example, what will happen if I replace it with
 joinedload(cls.bars)?
 
 You will then get the wrong results.   The docstring tries to explain this - 
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one 
 for each bar.  If you cut off the results in the middle of populating that 
 collection, the collection is incomplete, you'll see the wrong collection on 
 your cls.bars.  On the next load, cls.bars will be wiped out and populated 
 with the remaining bar objects.
 
 Don't use yield_per.  Use windowing instead, see 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery.





Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 11:41 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.
 
 You will then get the wrong results.   The docstring tries to explain this -
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one
 for each bar.  If you cut off the results in the middle of populating that
 collection, the collection is incomplete, you'll see the wrong collection on
 your cls.bars.  On the next load, cls.bars will be wiped out and populated
 with the remaining bar objects.
 
 Ok, I think I understand this too.
 
 I've tried WindowedRangeQuery. It looked promising at first but it is (much)
 slower than yield_per() with all its quirks, at least for my usecase.

OK, but with yield_per() you want to use eager loading also, so yield_per() is 
not fast enough either, I guess


 If I
 understand the WindowedRangeQuery recipe, it does a full scan of the target
 table first to read all the ids and calculate the bounds of all the windows. I
 don't want to it like this. I am working with relatively large datasets but it
 is still far less than all rows in the table. Something like 10-5 rows 
 from
 a table with 1-2 million rows. The windowed query iterates over many 
 completely
 empty windows.
 
 Can I modify the recipe so it preserves the filtering and creates windows only
 for the interesting subset of the table?

Absolutely, you should do whatever you have to in order to get the range you 
want; in fact the recipe even says this:

 "Enhance this yourself!  Add a where argument so that windows of just a 
 subset of rows can be computed."

if your situation is even simpler than that, such as just querying for PKs 
50-1000, you could just make up your own integer ranges within those two 
endpoints.






Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
Hello.

 OK, but with yield_per() you want to use eager loading also, so yield_per() is
 not fast enough either, I guess

No. I use yield_per() on complex queries with join(), filter() and both
joinedload() and subqueryload(). It is possible that they sometimes return
wrong results because of yield_per(). I am not sure about that, but it is
definitely much faster than the original WindowedRangeQuery recipe. I can only
speculate that postgres caches subqueryload results...


 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

Ok. What I want to do is basically the following:
* Suppose a complex query q with join(...), filter(...) and options(...).
* I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
* I will use q2 to obtain the window ranges.
* I will iterate over the window ranges and apply each to the original q and
execute it in a loop.

Can this strategy work?
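In code, the non-database part of this plan (pairing the window start ids from the q2 query into ranges to apply back to q) might look like the following sketch; the SQLAlchemy calls in the comments use made-up names (Cls, windowsize) and are untested here:

```python
def ranges_from_starts(starts):
    """Pair window-start ids into half-open (start, end) ranges.

    The final range is open-ended (end=None) so it covers the tail.
    """
    return [(s, starts[i + 1] if i + 1 < len(starts) else None)
            for i, s in enumerate(starts)]

# Sketch of how this would plug into the strategy above:
#   q2 = q.with_entities(Cls.id)      # same joins/filters, ids only
#   starts = [pk for (pk,) in q2]     # every windowsize-th id in practice
#   for lo, hi in ranges_from_starts(starts):
#       window = q.filter(Cls.id >= lo)
#       if hi is not None:
#           window = window.filter(Cls.id < hi)
#       for obj in window:
#           ...
```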


Thank you,

Ladislav Lenart


On 4.6.2013 17:57, Michael Bayer wrote:
 
 On Jun 4, 2013, at 11:41 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.

 You will then get the wrong results.   The docstring tries to explain this -
 a joinedload uses a JOIN.  For each cls instance, there are many rows, one
 for each bar.  If you cut off the results in the middle of populating that
 collection, the collection is incomplete, you'll see the wrong collection on
 your cls.bars.  On the next load, cls.bars will be wiped out and populated
 with the remaining bar objects.

 Ok, I think I understand this too.

 I've tried WindowedRangeQuery. It looked promising at first but it is (much)
 slower than yield_per() with all its quirks, at least for my usecase.
 
  OK, but with yield_per() you want to use eager loading also, so yield_per() is 
  not fast enough either, I guess
 
 
 If I
 understand the WindowedRangeQuery recipe, it does a full scan of the target
 table first to read all the ids and calculate the bounds of all the windows. 
 I
 don't want to do it like this. I am working with relatively large datasets but 
 it
 is still far less than all rows in the table. Something like 10^5 rows 
 from
 a table with 1-2 million rows. The windowed query iterates over many 
 completely
 empty windows.

 Can I modify the recipe so it preserves the filtering and creates windows 
 only
 for the interesting subset of the table?
 
 Absolutely, you should do whatever you have to in order to get the range you 
 want, in fact the recipe even says this:
 
  Enhance this yourself !  Add a where argument
 so that windows of just a subset of rows can
 be computed.
 
 if your situation is even simpler than that, such as just querying from PKs 
 50-1000, you could just make up your own integer ranges within those two 
 endpoints.   





Re: [sqlalchemy] Changing a value linked with a one-to-one relationship

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 11:15 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 There seems to be some unfortunate effect going on, as the issue of backrefs 
 loading collections like this was long ago resolved to defer that activity, 
 but it appears that when uselist=False is present, this logic no longer takes 
 place.  Ticket http://www.sqlalchemy.org/trac/ticket/2741 has been added to 
 look into this as this should not be happening.
 

OK I've looked into it, and it actually should be happening, because the 
difference between uselist=True and uselist=False is that when setting a 
uselist=False, the *old* value must be removed, so we have to load the old 
value in order to reflect that in the history.

So when we say:

plant.taxon = taxon_new

SQLAlchemy must load taxon.plant on taxon_new to get the previous plant 
value that's present, if any, so that it can be de-associated with taxon_new as 
taxon.plant is uselist=False.   This can't be determined with taxon_new alone 
- the database must be scanned for taxon_new's identity in the plant table.   
 So by default this will also autoflush, so here you just want to temporarily 
disable autoflush.
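In SQLAlchemy that last suggestion is the Session.no_autoflush context manager (`with session.no_autoflush: plant.taxon = taxon_new`). A toy stand-in, purely to illustrate the save/restore pattern that the context manager applies; FakeSession is invented, not a SQLAlchemy API:

```python
from contextlib import contextmanager

class FakeSession(object):
    """Toy stand-in showing the flag toggle; the real Session.no_autoflush
    is a property, used as `with session.no_autoflush:` (no call parens)."""
    def __init__(self):
        self.autoflush = True

    @contextmanager
    def no_autoflush(self):
        # Save the flag, disable it for the block, restore it on exit
        # even if the block raises.
        saved, self.autoflush = self.autoflush, False
        try:
            yield self
        finally:
            self.autoflush = saved

session = FakeSession()
with session.no_autoflush():
    inside = session.autoflush    # False: assignments here won't flush
after = session.autoflush         # True: restored on exit
```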







Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Michael Bayer

On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:

 
 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...
 
 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.
 
 Can this strategy work?

there's not a publicly supported feature to reset the options right now so 
you'd probably need to apply them after you get your window ranging query.  
You can probably remove the effect of eager loaders if you were to just assign 
a blank _attributes dictionary to the new query though (just no long term 
guarantees).

other than that, I use the windowing concept extensively and it works very well.







Re: [sqlalchemy] [Q][SA 0.7.9] yield_per() vs. joinedload() and subqueryload()

2013-06-04 Thread Ladislav Lenart
On 4.6.2013 18:49, Michael Bayer wrote:
 
 On Jun 4, 2013, at 12:28 PM, Ladislav Lenart lenart...@volny.cz wrote:
 

 Absolutely, you should do whatever you have to in order to get the range
 you want, in fact the recipe even says this...

 Ok. What I want to do is basically the following:
 * Suppose a complex query q with join(...), filter(...) and options(...).
 * I need to create q2 from q such that:
  * It has no options.
* Can I reset the options with q = q.options(None)?
  * It has select with the over by magic taken from WindowedRangeQuery recipe.
* I know I can use with_entities() for this.
 * I will use q2 to obtain the window ranges.
 * I will iterate over the window ranges and apply each to the original q and
 execute it in a loop.

 Can this strategy work?
 
 there's not a publicly supported feature to reset the options right now so 
 you'd probably need to apply them after you get your window ranging query.
   You can probably remove the effect of eager loaders if you were to just 
 assign a blank _attributes dictionary to the new query though (just no long 
 term guarantees).
 
 other than that, I use the windowing concept extensively and it works very 
 well.

Ok, I will try it.

Thank you very much for your invaluable insights,

Ladislav Lenart





[sqlalchemy] Re: nested inheritance / polymorphic relationships

2013-06-04 Thread Amir Elaguizy
Just to tie this off, I ended up flattening the tree so it looks like:

       A
    /  |  |  \
   B   C  E   F

D is now gone, and the functionality it provided is in the children (E & F).
I'll probably make the common parts a mixin or something.

Unfortunate but I couldn't spend more time on this particular issue.
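For reference, the "common parts as a mixin" idea in plain Python; the `describe` method and constructors are invented for illustration (with declarative SQLAlchemy models, shared columns would live on such a mixin class instead):

```python
class DCommon(object):
    """Behavior formerly on D, now shared by its ex-children E and F."""
    def describe(self):
        # Both subclasses are assumed to set self.name.
        return "%s:%s" % (type(self).__name__, self.name)

class E(DCommon):
    def __init__(self, name):
        self.name = name

class F(DCommon):
    def __init__(self, name):
        self.name = name
```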

On Monday, June 3, 2013 10:55:15 PM UTC-7, Amir Elaguizy wrote:

 Hi there,

 I have a tree that looks like this, reflected via polymorphic inheritance:


       A
     / | \
    B  C  D

 That works great, like:


 class BaseModel(db.Model):  # Table A in diagram
     __tablename__ = "entities"
 
     id = db.Column(db.BigInteger, primary_key=True, nullable=False,
                    server_default=func.nextval('guid_seq'))
     type_id = db.Column(db.SmallInteger,
                         db.ForeignKey(EntityTypesModel.id))
 
     __mapper_args__ = {
         'polymorphic_identity': 'entity',
         'polymorphic_on': type_id,
         'with_polymorphic': '*'
     }
 
 class BrandModel(BaseModel):  # Table B, C, D in diagram
     __tablename__ = 'brands'
 
     id = db.Column(db.BigInteger, db.ForeignKey(StufffModel.id),
                    primary_key=True, nullable=False)
     name = db.Column(db.String, nullable=False)
 
     __mapper_args__ = {
         'polymorphic_identity': ET_BRAND,
     }


 The problem is I need to reflect something more like this:

        A
      / | \
     B  C  D
           / \
          E   F

 Where D is not only a polymorphic child of A but also the polymorphic 
 parent of E & F.

 It seems like I have to choose, D can either be a polymorphic child or it 
 can be a parent - it can't be both.

 Do I have any options here?






Re: [sqlalchemy] Query and compiled_cache

2013-06-04 Thread Claudio Freire
On Sun, Jun 2, 2013 at 9:41 PM, Claudio Freire klaussfre...@gmail.com wrote:

 So the whole thing is rolled up into the named thing I referred to also,
 so that there's no need to keep a Query object hanging around, when we say
 bake() we're really just referring to a position in the code somewhere,
 so I've updated the wiki recipe to use a named system like this:

 q = s.query(Foo).\
     filter(Foo.data == bindparam('foo')).\
     bake_as("foo", cache)
 result = q.params(foo='data 12').all()


 A highly cleaned up version of your test is attached.

 I'm still not sure I'm getting everything accounted for here!  thanks for
 testing !   The feature is actually looking quite simple and probably works
 better as something built in, or at least if we added some methods to
 QueryContext to ease the burden of caching/copying it.


 Well, if that works, it certainly covers my needs so there would be no
 pressing need to incorporate it into the core.
 I'll let you know tomorrow.



I've done only superficial testing for now (I have to build myself a test
database), but I thought I'd let you know that it seems to work flawlessly so
far.

:^)

Thanks





Re: [sqlalchemy] Reliable way to read comments from database schema

2013-06-04 Thread Michael Bayer
There's a long standing ticket to add support for comments, at least at the DDL 
level.   I don't think anyone has looked into what level of support we get from 
the various backends as far as reflection.

So it's something the library has room for, but it's an open item for now.  The 
four-year-old ticket is http://www.sqlalchemy.org/trac/ticket/1546.




On Jun 4, 2013, at 8:33 PM, Warwick Prince warwi...@mushroomsys.com wrote:

 Hi Michael
 
 I'm toying with the idea of embedding some metadata into the comments on 
 columns and/or the table comment.  Is there a way to reliably read the 
 comment from the column definition via reflection across all dialects that 
 would support a comment at a column level?   Also, can I read the comment 
 normally attached to the table definition?
 
 Thanks
 Warwick
 





Re: [sqlalchemy] Reliable way to read comments from database schema

2013-06-04 Thread Mariano Mara



On 06/04/2013 10:46 PM, Michael Bayer wrote:

There's a long standing ticket to add support for comments, at least at the DDL 
level.   I don't think anyone has looked into what level of support we get from 
the various backends as far as reflection.

So it's something the library has room for, but it's an open item for now.  The 
four-year-old ticket is http://www.sqlalchemy.org/trac/ticket/1546.




On Jun 4, 2013, at 8:33 PM, Warwick Prince warwi...@mushroomsys.com wrote:


Hi Michael

I'm toying with the idea of embedding some metadata into the comments on 
columns and/or the table comment.  Is there a way to reliably read the comment 
from the column definition via reflection across all dialects that would 
support a comment at a column level?   Also, can I read the comment normally 
attached to the table definition?

Thanks
Warwick



If this metadata will be used just in sqlalchemy (or python libraries up 
in the stack), you could use the info parameter when creating columns.
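For instance, a sketch of the `info` parameter (the `widgets` table and the dict keys are invented for illustration). Note that `info` is a free-form dict kept on the Python-side Column object and is never emitted as DDL, so nothing reaches the database itself:

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()
# `info` is an arbitrary dict attached to the Column object; it is not
# rendered into CREATE TABLE, so it stays purely on the Python side.
widgets = Table('widgets', metadata,
    Column('id', Integer, primary_key=True,
           info={'comment': 'surrogate key', 'owner': 'billing'}),
)

comment = widgets.c.id.info['comment']   # read it back via reflection-free access
```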







Re: [sqlalchemy] Reliable way to read comments from database schema

2013-06-04 Thread Warwick Prince

 On 06/04/2013 10:46 PM, Michael Bayer wrote:
 There's a long standing ticket to add support for comments, at least at the 
 DDL level.   I don't think anyone has looked into what level of support we 
 get from the various backends as far as reflection.
 
  So it's something the library has room for, but it's an open item for now.  
 The four-year-old ticket is http://www.sqlalchemy.org/trac/ticket/1546.
 
 
 
 
 On Jun 4, 2013, at 8:33 PM, Warwick Prince warwi...@mushroomsys.com wrote:
 
 Hi Michael
 
 I'm toying with the idea of embedding some metadata into the comments on 
 columns and/or the table comment.  Is there a way to reliably read the 
 comment from the column definition via reflection across all dialects that 
 would support a comment at a column level?   Also, can I read the comment 
 normally attached to the table definition?
 
 Thanks
 Warwick
 
 
 If this metadata will be used just in sqlalchemy (or python libraries up in 
 the stack), you could use the info parameter when creating columns.

OK - sounds promising.  What actually is the 'info' that it reads/writes?  i.e. 
where is it getting it from/putting it in the DB? (Or does it only reside in 
the Python code?)

Also, I'd like to +1 the task to reflect the comments with support for as many 
dialects as possible :-)   We have developed a large platform using SQLA as the 
ORM, and as part of the maintenance area we have a generic database manager 
that is intended to allow maintenance, viewing, manual row insertion/deletion 
etc. (like MySQL Workbench et al.) and that is 100% database agnostic.  Comments are 
missing at the moment, which is a shame.

 





Re: [sqlalchemy] nested inheritance / polymorphic relationships

2013-06-04 Thread Ladislav Lenart
Hello.

 It seems like I have to choose, D can either be a polymorphic child or it can 
 be
 a parent - it can't be both.

 Do I have any options here?

I am almost sure you are correct. This is not possible in SA so you have to
flatten your hierarchy.

I don't know about others' experience with inheritance on this list, but mine in
SA 0.7.9 is not that good. I encountered various limitations along the way. I
guess it depends on the complexity of the queries. We plan to get rid of it
eventually in our app. However, if I am not mistaken, SA 0.8 addresses all the
quirks.


Ladislav Lenart


On 4.6.2013 07:55, Amir Elaguizy wrote:
 Hi there,
 
 I have a tree that looks like this, reflected via polymorphic inheritance:
 
 
        A
      / | \
     B  C  D
 
 That works great, like:
 
 
  class BaseModel(db.Model):  # Table A in diagram
      __tablename__ = "entities"
 
      id = db.Column(db.BigInteger, primary_key=True, nullable=False,
                     server_default=func.nextval('guid_seq'))
      type_id = db.Column(db.SmallInteger,
                          db.ForeignKey(EntityTypesModel.id))
 
      __mapper_args__ = {
          'polymorphic_identity': 'entity',
          'polymorphic_on': type_id,
          'with_polymorphic': '*'
      }
 
  class BrandModel(BaseModel):  # Table B, C, D in diagram
      __tablename__ = 'brands'
 
      id = db.Column(db.BigInteger, db.ForeignKey(StufffModel.id),
                     primary_key=True, nullable=False)
      name = db.Column(db.String, nullable=False)
 
      __mapper_args__ = {
          'polymorphic_identity': ET_BRAND,
      }
 
 
 The problem is I need to reflect something more like this:
 
        A
      / | \
     B  C  D
           / \
          E   F
 
 Where D is not only a polymorphic child of A but also the polymorphic parent 
 of E & F.
 
 It seems like I have to choose, D can either be a polymorphic child or it can 
 be
 a parent - it can't be both.
 
 Do I have any options here?
 
