Re: [sqlalchemy] echo_pool=True doesn't seem to have an effect
Thanks. Now I'm able to confirm that connections are being returned to the pool.

On Monday, July 9, 2012 3:50:53 PM UTC+2, Michael Bayer wrote:
> don't feel bad because I had to spend 10 minutes figuring this out again, to see checkin/checkout events you need to use echo_pool="debug". echo_pool=True just shows major events like connection invalidations.
>
> On Jul 9, 2012, at 9:25 AM, bojanb wrote:
> > Hi,
> >
> > I'm trying to debug some issues with sessions in my SQLAlchemy 0.7.4 application. However, setting echo_pool to True doesn't seem to log anything to standard output:
> >
> > db_engine = create_engine(DB_URI, echo_pool=True)
> > Session = sessionmaker(bind=db_engine)
> >
> > Standard logging (echo=True) works fine of course. Any idea on what might be wrong?
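For reference, a minimal sketch of the two ways to get pool checkout/checkin logging that the reply describes; DB_URI stands in for the real database URL from the original post.

import logging
from sqlalchemy import create_engine

# echo_pool=True only logs major pool events such as connection
# invalidations; the string "debug" also logs every checkout/checkin.
db_engine = create_engine(DB_URI, echo_pool="debug")

# The same effect via the standard logging module, without an engine flag:
logging.basicConfig()
logging.getLogger("sqlalchemy.pool").setLevel(logging.DEBUG)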
[sqlalchemy] echo_pool=True doesn't seem to have an effect
Hi,

I'm trying to debug some issues with sessions in my SQLAlchemy 0.7.4 application. However, setting echo_pool to True doesn't seem to log anything to standard output:

db_engine = create_engine(DB_URI, echo_pool=True)
Session = sessionmaker(bind=db_engine)

Standard logging (echo=True) works fine of course. Any idea on what might be wrong?
[sqlalchemy] Re: Column property vs. Python (class) property for calculated columns
> > However, I have issues with the difference in NULL value semantics between Python and SQL. I.e. if a calculated column is defined via a column_property as price*qty, then the result will be NULL if any of the values is NULL. However, in Python, None*something throws a TypeError, so the hybrid_property getter function needs to be filled with lots of IFs.
>
> When called as class properties, the descriptors always generate SQL expressions as above.
>
> When called as instance properties, it just calls your function and you can do what you like with None values, e.g.:
>
> @hybrid_property
> def length(self):
>     return self.thing is not None and self.thing or None

That's what I did, I just wasn't sure it's the right way to do it.

> > Also, this solution can't be used for date calculations, as timedelta objects are needed. So I guess I will stick with a mix of Python
>
> The example works in postgresql with timestamps and intervals/timedeltas. ;-)
>
> a.

Then I'm doing something terribly wrong. If I define:

due_date = hybrid_property(lambda self: self.invoice_date + self.payment_days)

When querying with session.query(Invoice).filter(Invoice.due_date
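As a concrete sketch of the None handling discussed above (a hypothetical Invoice class with price and qty columns; sqlalchemy.ext.hybrid is the 0.7+ home of this pattern, and the derived_attributes recipe mentioned in this thread behaves the same way): at instance level the guard returns None, while at class level the "is None" checks are false for column attributes, so the plain SQL expression is returned and NULL propagates the way SQL normally does.

from sqlalchemy import Column, Integer, Numeric
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()

class Invoice(Base):
    __tablename__ = 'invoices'
    id = Column(Integer, primary_key=True)
    qty = Column(Integer)
    price = Column(Numeric)

    @hybrid_property
    def amount(self):
        # Instance level: Python semantics, so guard against None.
        # Class level: the checks evaluate to False for column attributes
        # and the SQL expression price * qty is returned instead.
        if self.price is None or self.qty is None:
            return None
        return self.price * self.qty

# Invoice.amount > 1000 renders as invoices.price * invoices.qty > 1000,
# while Invoice(qty=2).amount returns None instead of raising TypeError.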
[sqlalchemy] Re: Mapping an object to multiple tables
This can be done and it's not too complicated, but beware: in 95% of cases it's a deficiency in your model, i.e. you can refactor your model so that you don't need this. If it's one of the other 5% of cases, here's what the code looks like (I can't honestly remember if I read this in the docs or got it as an answer from Mike someplace, so no link to source):

mapper(CompositeClass, join(ClassA, ClassB, join_condition), properties={
    # Be sure to mark identical columns like this, otherwise inserts won't work
    'composite_id': [table_A.c.id, table_B.c.a_id],
    'property_from_A': table_A.c.somecolumn,
    # If B inherits from some class, this needs to be specified explicitly
    'property_from_B': [table_B.c.somecolumn, table_B_superclass.c.somecolumn],
    'some_relation': relation(ClassD, primaryjoin=(table_B.c.d_id==table_D.c.id)),
    ...
})

I hope the pseudocode isn't too confusing :-)

On Jan 16, 8:41 pm, justin potts wrote:
> Michael Bayer wrote:
> > On Jan 16, 2010, at 1:02 AM, justin potts wrote:
> > > I realize it's not the typical case, but I would like to know if it's possible to map a class to multiple tables at once, somewhat like mapping to a join statement. I've read the documentation about querying with joins, however this seems like a large amount of work to build an object sourced from multiple tables. Any help is appreciated.
> >
> > what scheme for representing objects along multiple tables did you have in mind ? this can mean any number of things.
>
> I'm wondering if it's possible to instantiate an object with properties from different tables. Since it's usually a 1-1 mapping, I suppose the scheme in this case would be many-to-1.
[sqlalchemy] Re: Column property vs. Python (class) property for calculated columns
Thanks Mike. I must admit I don't understand why that code works, but it does. I guess that's the "Alchemy" in "SQLAlchemy" :-)

However, I have issues with the difference in NULL value semantics between Python and SQL. I.e. if a calculated column is defined via a column_property as price*qty, then the result will be NULL if any of the values is NULL. However, in Python, None*something throws a TypeError, so the hybrid_property getter function needs to be filled with lots of IFs.

Also, this solution can't be used for date calculations, as timedelta objects are needed. So I guess I will stick with a mix of Python properties and column_properties.

On Jan 14, 4:23 pm, "Michael Bayer" wrote:
> bojanb wrote:
> > Let's say I want to have a simple calculated property in my class, e.g. amount which is just qty * price.
> >
> > I can define it as a column_property in a mapper which makes it available in all database operations, e.g. I can write session.query(myclass).filter_by(amount>1000) which will create the correct WHERE clause "qty*price>1000".
> >
> > However, the attribute is None until the object is flushed to the database, i.e.
> >
> > myclass.qty = 2
> > myclass.price = 500
> > print myclass.amount
> >
> > will return None if flush was not issued.
> >
> > If I use a Python property function to define it, it will be immediately available (and always up to date); however, I cannot query on a Python property.
>
> ultimately the value of this attribute is derived from other attributes which are mapped. So there is a very simple and clever way to get both in that case which you can see if you look at examples/derived_attributes/attributes.py.
>
> > Is there a way to have the best of both worlds? Or should I just define the calculated property twice, e.g. a Python property named 'amount' and a column_property named 'db_amount', and then work with the first but use the second for querying?
[sqlalchemy] Column property vs. Python (class) property for calculated columns
Let's say I want to have a simple calculated property in my class, e.g. amount which is just qty * price.

I can define it as a column_property in a mapper which makes it available in all database operations, e.g. I can write session.query(myclass).filter_by(amount>1000) which will create the correct WHERE clause "qty*price>1000".

However, the attribute is None until the object is flushed to the database, i.e.

myclass.qty = 2
myclass.price = 500
print myclass.amount

will return None if flush was not issued.

If I use a Python property function to define it, it will be immediately available (and always up to date); however, I cannot query on a Python property.

Is there a way to have the best of both worlds? Or should I just define the calculated property twice, e.g. a Python property named 'amount' and a column_property named 'db_amount', and then work with the first but use the second for querying?
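For concreteness, a sketch of the column_property setup described above, against a hypothetical items table (the table and class names are placeholders, not from the original post): the attribute can be used in queries, but on a pending instance it stays None until the row has been flushed and loaded back.

from sqlalchemy import Table, Column, Integer, Numeric, MetaData
from sqlalchemy.orm import mapper, column_property

metadata = MetaData()

items = Table('items', metadata,
    Column('id', Integer, primary_key=True),
    Column('qty', Integer),
    Column('price', Numeric))

class Item(object):
    pass

mapper(Item, items, properties={
    # Queryable: session.query(Item).filter(Item.amount > 1000) renders
    # WHERE qty * price > 1000, but item.amount on a pending object stays
    # None until the object is flushed and refreshed from the database.
    'amount': column_property((items.c.qty * items.c.price).label('amount')),
})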
[sqlalchemy] Re: Identical column names in parent and child classes with joined-table inheritance
This does it. One small drawback is that since the field is now defined as an attribute, one can't query on it (i.e. session.query(class_).filter_by(modified_by='jack')), but we don't envision such a use case for this functionality so it's OK for us.

Recap of what was done: table columns were defined as 'modified', but these columns were renamed in the mappers as '_modified'. Then a 'modified' attribute was added to all classes (well, to the superclass, to that effect) that wraps the value of 'tablename_modified'. Phew!

On Nov 12, 5:06 pm, "Michael Bayer" wrote:
> xaotuk wrote:
> > We have tried suggested, but: field 'modified' exists in both parent and child tables, when we redefined property 'modified' in mapper with something like this: mapper(Child, child_table, properties={'modified': child_table.c.modified, ...}), the modified field still returned the value from the parent's table.
>
> here is an example illustrating how to move "modified" away as an attribute name within each mapped class, allowing them to be accessible separately despite the tables having the same column names. If you want to set the parent's "modified" column on the child, use the "parent_modified" attribute:
>
> from sqlalchemy import *
> from sqlalchemy.orm import *
>
> engine = create_engine("sqlite://", echo=True)
>
> m = MetaData()
>
> parent = Table('parent', m,
>     Column('id', Integer, primary_key=True),
>     Column('type', String),
>     Column('modified', String)
> )
>
> child = Table('child', m,
>     Column('id', Integer, ForeignKey('parent.id'), primary_key=True),
>     Column('modified', String)
> )
> m.create_all(engine)
>
> class Parent(object):
>     def __init__(self, modified):
>         self.modified = modified
>
>     def modified(self):
>         return self.parent_modified
>
>     def _set_modified(self, m):
>         self.parent_modified = m
>
>     modified = property(modified, _set_modified)
>
> class Child(Parent):
>     def modified(self):
>         return self.child_modified
>
>     def _set_modified(self, m):
>         self.child_modified = m
>
>     modified = property(modified, _set_modified)
>
> mapper(Parent, parent, polymorphic_on=parent.c.type,
>     polymorphic_identity='parent', properties={
>         'parent_modified': parent.c.modified})
>
> mapper(Child, child, inherits=Parent, polymorphic_identity='child',
>     properties={
>         'child_modified': child.c.modified
>     })
>
> s = sessionmaker(engine)()
>
> p1 = Parent('p1')
> c1 = Child('c1')
> assert p1.modified == 'p1'
> assert c1.modified == 'c1'
>
> s.add_all([p1, c1])
> s.commit()
> assert p1.modified == 'p1'
> assert c1.modified == 'c1'
>
> > We also tried to add a property to the class like this:
> >
> > class Parent(object):
> >     table = None
> >     ...
> >     def _fieldFromQuery(self, field):
> >         if not self.table:
> >             return None
> >         return Session.object_session(self).execute(
> >             select([getattr(self.table.c, field)]).where(self.table.c.id==self.id)).scalar()
> >
> >     def _modified(self):
> >         return self._fieldFromQuery("modified")
> >
> >     def _modified_by(self):
> >         return self._fieldFromQuery("modified_by")
> >
> >     modified = property(_modified)
> >     modified_by = property(_modified_by)
> >
> > where self.table is assigned just before the mapper is created. The mapper itself again redefined the 'modified' and 'modified_by' attributes, so we were back to the starting problem.
> >
> > On Oct 29, 3:56 pm, "Michael Bayer" wrote:
> > > bojanb wrote:
> > > > Hi,
> > > >
> > > > Can I have identical column names in both parent and child classes that are part of a joined-table inheritance? These are simply created, created_by, modified, modified_by columns that are populated by defaults defined for them (i.e. default, server_default, onupdate).
> > > >
> > > > The values are written to the database correctly, but I have a problem reading them because the parent's column values override the child values. So, if the parent has null values for modified, modified_by and the child some actual values, the ORM returns None for child.modified and child.modified_by.
> > >
> > > use the "properties" dictionary to mapper to redefine the names. or the declarative equivalent.
> > > see http://www.sqlalchemy.org/docs/05/mappers.html#customizing-column-pro...
> > >
> > > > Suggestions?
[sqlalchemy] Query on a related object's field
What is the easiest way of getting the equivalent of this:

session.query(Someclass).filter_by(related_obj.field=somevalue)

I.e. I want to filter by a field of an object that is in a relation to objects of Someclass. My original idea was to add related_obj.field as a new relation in the mapper for Someclass, but I just couldn't accomplish that. Is there an easier way?
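No answer was recorded in this thread, but the usual way to express this kind of filter is a join onto the relation, or an EXISTS via has()/any(). Someclass, related_obj, RelatedClass, field and somevalue are placeholders taken from the question, not real names.

# Join along the relation and filter on the related class's column:
session.query(Someclass) \
    .join(Someclass.related_obj) \
    .filter(RelatedClass.field == somevalue)

# Or build an EXISTS subquery; use .any() instead of .has() if
# related_obj is a collection rather than a many-to-one reference:
session.query(Someclass).filter(Someclass.related_obj.has(field=somevalue))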
[sqlalchemy] Re: A problem with SQLA'a referential integrity behavior, deferred constraints and PostgreSQL
Yes, the passive_deletes='all' setting solves this; the trick is to put it on the backref (I was putting it on the forward relation).

On Oct 29, 8:29 pm, "Michael Bayer" wrote:
> Michael Bayer wrote:
> > bojanb wrote:
> > > On Oct 29, 5:32 pm, "Michael Bayer" wrote:
> > > > how would the UOW "honor" RESTRICT ? if you tell it to delete a parent, and you didn't tell it to delete a child, and you have a non-nullable FK or RESTRICT, you'd expect it to throw an error, right ? isn't that what happens here ?
> > >
> > > Well I don't know if we understand each other. If I have a ForeignKey defined with ondelete='RESTRICT', and I delete the parent, no errors are thrown.
> > >
> > > This only becomes apparent when you have a foreign key that is NOT defined as non-nullable. Even if you define it to be non-nullable, the error thrown when you try to delete the parent is not "update or delete on table violates foreign key constraint", but "null value in column violates not-null constraint".
> > >
> > > I've created a ticket (http://www.sqlalchemy.org/trac/ticket/1594) with more details and a test case to make it clearer.
> >
> > thanks for this.
>
> correcting myself, the feature is already present (and is documented):
>
> mapper(Person, persons, properties={
>     'country': relation(Country, backref=backref('persons',
>         passive_deletes='all'), passive_updates=True)})
>
> :param passive_deletes=False:
>     Indicates loading behavior during delete operations.
>
>     A value of True indicates that unloaded child items should not be loaded during a delete operation on the parent. Normally, when a parent item is deleted, all child items are loaded so that they can either be marked as deleted, or have their foreign key to the parent set to NULL. Marking this flag as True usually implies an ON DELETE rule is in place which will handle updating/deleting child rows on the database side.
>
>     Additionally, setting the flag to the string value 'all' will disable the "nulling out" of the child foreign keys, when there is no delete or delete-orphan cascade enabled. This is typically used when a triggering or error raise scenario is in place on the database side. Note that the foreign key attributes on in-session child objects will not be changed after a flush occurs so this is a very special use-case setting.
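Putting the pieces from this thread together, a hedged sketch of the full arrangement with hypothetical persons/countries tables: the ON DELETE RESTRICT rule lives on the foreign key, and passive_deletes='all' on the backref keeps the unit of work from nulling the child foreign keys, so deleting a referenced Country raises an IntegrityError from the database instead of silently detaching the children.

from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper, relation, backref

metadata = MetaData()

countries = Table('countries', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)))

persons = Table('persons', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
    # RESTRICT is enforced by the database when a referenced row is deleted
    Column('country_id', Integer, ForeignKey('countries.id', ondelete='RESTRICT')))

class Country(object):
    pass

class Person(object):
    pass

mapper(Country, countries)
mapper(Person, persons, properties={
    # 'all' disables the ORM's nulling-out of person.country_id on delete
    'country': relation(Country,
                        backref=backref('persons', passive_deletes='all'),
                        passive_updates=True),
})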
[sqlalchemy] Re: A problem with SQLA'a referential integrity behavior, deferred constraints and PostgreSQL
On Oct 29, 5:32 pm, "Michael Bayer" wrote:
> how would the UOW "honor" RESTRICT ? if you tell it to delete a parent, and you didn't tell it to delete a child, and you have a non-nullable FK or RESTRICT, you'd expect it to throw an error, right ? isn't that what happens here ?

Well, I don't know if we understand each other. If I have a ForeignKey defined with ondelete='RESTRICT' and I delete the parent, no errors are thrown.

This only becomes apparent when you have a foreign key that is NOT defined as non-nullable. Even if you define it to be non-nullable, the error thrown when you try to delete the parent is not "update or delete on table violates foreign key constraint", but "null value in column violates not-null constraint".

I've created a ticket (http://www.sqlalchemy.org/trac/ticket/1594) with more details and a test case to make it clearer.
[sqlalchemy] Re: A problem with SQLA'a referential integrity behavior, deferred constraints and PostgreSQL
> SQLAlchemy's unit of work attempts to set it to None during the flush.

Doesn't that imply that ondelete='RESTRICT' is not honored by the UoW? This leads to completely opposite behaviour between deleting via the ORM (i.e. session.delete) and deleting via SQL (i.e. connection.execute(table.delete())). In the first case the referencing columns are NULLed; in the second an error is thrown. Wouldn't it be more logical if both were the same, i.e. the UoW throwing IntegrityError?
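A small sketch of the contrast being described, with hypothetical parent/child tables rather than the poster's schema: the ORM path nulls the foreign keys and the delete succeeds, while the plain SQL path lets PostgreSQL enforce the RESTRICT rule.

# ORM delete: without passive_deletes='all', the unit of work loads the
# children and sets child.parent_id = NULL before deleting the parent,
# so no foreign key error is raised.
session.delete(some_parent)
session.commit()

# Core/SQL delete: the ON DELETE RESTRICT rule fires in the database and
# an IntegrityError ("violates foreign key constraint") is raised.
connection.execute(
    parents_table.delete().where(parents_table.c.id == some_parent_id))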
[sqlalchemy] Identical column names in parent and child classes with joined-table inheritance
Hi,

Can I have identical column names in both parent and child classes that are part of a joined-table inheritance? These are simply created, created_by, modified, modified_by columns that are populated by defaults defined for them (i.e. default, server_default, onupdate).

The values are written to the database correctly, but I have a problem reading them because the parent's column values override the child values. So, if the parent has null values for modified, modified_by and the child some actual values, the ORM returns None for child.modified and child.modified_by.

Suggestions?
[sqlalchemy] Re: A problem with SQLA'a referential integrity behavior, deferred constraints and PostgreSQL
If I understood correctly, you suggest adding a @validates method for the column that I don't want to be null on commit? That doesn't work, since it wouldn't allow me to store objects with null attributes in the current transaction. In fact, @validates is even more restrictive than a non-null constraint, since the latter lets the object attribute be null all the way until flush.

What I need is a deferred constraint, i.e. a constraint that can be violated in a flush but never in a commit. I can have that by doing a check before each commit (since that's an operation that is performed explicitly in the application), but ondelete='RESTRICT' is not honored in such a setup.

On Oct 26, 4:13 pm, "Michael Bayer" wrote:
> bojanb wrote:
> > I have columns in my database that logically shouldn't be null. However, I allow them to be null while the user is editing (creating) objects interactively. As this is done in a transaction, and values are checked before a commit, it's assured that nobody else ever sees invalid values and the integrity of the data is preserved.
> >
> > This would be an ideal use case for deferred constraints; unfortunately, Postgres doesn't support deferred CHECK constraints, so the validation is done in code. This in turn means that the fields are not declared as "nullable=False".
> >
> > This leads to a problem when referential integrity needs to be preserved. When a referred object is deleted, SQLA's default behavior is to set the child object's foreign key field to None (e.g. see http://groups.google.com/group/sqlalchemy/browse_thread/thread/1f9990...). This in turn lets the RDBMS delete the referred object, instead of throwing a referential integrity exception. I am left with hanging child objects even though I specified ondelete='RESTRICT' for the relation.
> >
> > Is there any way around this? I've tried passive_deletes='all' but it didn't do the trick...
>
> why not just add a @validates method to your class, set for the foreign key attribute, which ensures that the incoming value is not "None" ?
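For illustration, a minimal sketch of the "check before each commit" idea described above, using the SessionExtension API from the SQLAlchemy 0.5/0.6 series this thread is about; NotNullBeforeCommit and required_fields are hypothetical names for this sketch, not anything SQLAlchemy provides.

from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.interfaces import SessionExtension

class NotNullBeforeCommit(SessionExtension):
    # attribute names that must be non-null by commit time, keyed by class,
    # e.g. {Invoice: ('customer_id', 'due_date')}
    required_fields = {}

    def before_commit(self, session):
        for obj in list(session.new) + list(session.dirty):
            for attr in self.required_fields.get(type(obj), ()):
                if getattr(obj, attr) is None:
                    # raising here aborts the commit
                    raise ValueError("%s.%s must not be NULL at commit time"
                                     % (type(obj).__name__, attr))

Session = sessionmaker(bind=db_engine, extension=NotNullBeforeCommit())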
[sqlalchemy] A problem with SQLA'a referential integrity behavior, deferred constraints and PostgreSQL
I have columns in my database that logically shouldn't be null. However, I allow them to be null while the user is editing (creating) objects interactively. As this is done in a transaction, and values are checked before a commit, it's assured that nobody else ever sees invalid values and the integrity of the data is preserved.

This would be an ideal use case for deferred constraints; unfortunately, Postgres doesn't support deferred CHECK constraints, so the validation is done in code. This in turn means that the fields are not declared as "nullable=False".

This leads to a problem when referential integrity needs to be preserved. When a referred object is deleted, SQLA's default behavior is to set the child object's foreign key field to None (e.g. see http://groups.google.com/group/sqlalchemy/browse_thread/thread/1f9990e3a9a2a869). This in turn lets the RDBMS delete the referred object, instead of throwing a referential integrity exception. I am left with hanging child objects even though I specified ondelete='RESTRICT' for the relation.

Is there any way around this? I've tried passive_deletes='all' but it didn't do the trick...
[sqlalchemy] Re: Can I use MapperExtension.before_insert to prevent an object from being INSERTed on commit()?
> it does not.
>
> http://www.sqlalchemy.org/docs/05/reference/orm/interfaces.html?highl...
>
> "Returning EXT_STOP will halt processing of further extensions handling that method."
>
> that only refers to additional extensions.

Right. The doc was a little ambiguous; the "or use the default functionality if there are no other extensions" wording for EXT_CONTINUE kinda led me to believe that the default functionality (which is to insert the object in the database, I guess) is just treated as the last extension.

> turn off cascade on those backrefs:
>
> 'foo': relation(Bar, backref=backref('foos', cascade=None))

I figured out that session.expunge(new_object) before doing a commit() does what I need: keep the incomplete object from being committed while saving other new objects. I kept trying to do session.new.remove(new_object), but of course that didn't work, which led me to play with the more exotic solutions...
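A compact sketch of the expunge workaround described above; new_object and session are placeholders for the poster's own objects.

# new_object was added to the session implicitly via a backref; expunge()
# removes it from session.new, so commit() saves the other pending objects
# while skipping the half-initialized one.
session.expunge(new_object)
session.commit()

# Later, once the object is fully initialized, add it back and commit:
session.add(new_object)
session.commit()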
[sqlalchemy] Can I use MapperExtension.before_insert to prevent an object from being INSERTed on commit()?
I was under the impression that returning EXT_STOP from my MapperExtension.before_insert() can prevent an object from being inserted into the database altogether, but that doesn't seem to be working, so I'm not sure if I'm misunderstanding the operation of MapperExtensions or it's a bug.

I'd like to have objects with a (non-mapped) attribute "temporary" set to True not be written to the database, because they haven't been fully initialized yet (e.g. some of their fields would violate integrity constraints). And before you say "simply don't add them to the session until they have been initialized" - they get added automatically because of backrefs.

I can attach a test case if further clarification is needed.
[sqlalchemy] Re: Pre-commit validation spanning multiple tables/ORM classes
> meaning, you set A.a and you can't depend on A.b being correct yet ? Well sure. How would you have it done ? Something has to trigger the "validate" event at some point. So if you need to wait for all of A.a, A.b, A.c, etc. to be setup first, then sure you'd throw your validation into before_flush() or mapper extension before_insert()/before_update() - or just tailor your classes' interface as needed, such as A.set_values(a, b, c). Personally I opt for the latter since its simple and produces an immediate validation effect rather than waiting for a flush.

What I meant to say is that validators don't fit well with something that I'm attempting to do. I'll probably implement a little validation "framework" that will do validations just before a flush. I'll be sure to post it here if it's interesting enough!
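As an illustration of the "validate just before a flush" idea, a minimal sketch using SessionExtension.before_flush() from the pre-0.7 extension API used in this thread; the per-class validate() method is a hypothetical convention of this sketch, not something SQLAlchemy defines.

from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.interfaces import SessionExtension

class ValidatingSessionExtension(SessionExtension):
    def before_flush(self, session, flush_context, instances):
        # Run each pending object's own validate() hook, if it has one;
        # an exception raised here aborts the flush (and the commit).
        for obj in list(session.new) + list(session.dirty):
            validate = getattr(obj, 'validate', None)
            if validate is not None:
                validate()

Session = sessionmaker(bind=db_engine, extension=ValidatingSessionExtension())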
[sqlalchemy] Re: Pre-commit validation spanning multiple tables/ORM classes
My issue with SQLA validators is that they don't allow inconsistent state even on fields of a single object, which makes multi-field validation impossible.

E.g. imagine you have fields tax_id and country_code on a customer object. For country code 'us', tax_id should be 9 digits long; for country code 'ca', tax_id should be 15 digits long. If you create appropriate validators on both fields, you will never be able to change either the tax_id or country_code of any customer object.

The solution would be to allow inconsistent objects but do validation just before commit. I've been planning on implementing such validation for a project I'm working on, so any suggestions are welcome.

On Sep 22, 5:30 pm, Randall Nortman wrote:
> In my application, I have a set of tables that model parts of what are conceptually composite objects, and I need to validate that the state of the objects is coherent before committing them to the database. In the course of building up the network of objects, the state may be temporarily inconsistent (because it will, in general, be impossible to maintain consistency at every step of the process), but I want to make sure it is consistent by the time it hits the database. I think the answer is to make a SessionExtension and use before_commit(), but I still have some questions about exactly how that works.
>
> To make the question concrete, I have a one-to-many relationship, and the relevant state is the state of the parent+children composite. Any change to a child's attributes needs to trigger re-validation of the parent, and obviously any change in membership in the collection of children needs to trigger revalidation. In particular, if a child moves from one parent to another, then *both* parents must be re-validated before the transaction is committed. All this validation needs to occur even though the parent table is not modified in any of those cases. And I think I will likely want this to work also in a many-to-many relationship, where any change to the association table should trigger validation of all related (or newly unrelated) objects.
>
> Furthermore, I want to work with these objects as individual Parent and Child objects, not a single ParentWithChildren object. Or at a minimum, I want to be able to pass around and modify Child objects on their own; if I get the Children every time I ask for the Parent, that's fine.
>
> The @validates decorator is largely useless for this purpose, as it validates a particular attribute of a particular class, and it gets called at the wrong time, and in the case of collections, only gets called on append events, not remove events (afaict).
>
> So if I do this with a SessionExtension.before_commit(), I would have to iterate through the new, dirty, and deleted instances lists, inspect the type of each instance, and do whatever is required. I am not sure, though, how to handle the case of a change in membership in the parent/child relationship -- the child instance that is present in the dirty list will have only the new parent on it -- how do I find out what the old parent was, so I can validate it? If a flush has already occurred, the old value is already lost in the context of the current transaction, and I think that if I open a new transaction inside a before_commit() validator I'm just asking for trouble. Do I need to instrument the Child class with a descriptor that tracks changes to the parent and remembers the old parent? Or can I set the cascade option in such a way that the old parent will end up in the dirty list, even though there are no changes to its underlying table, and in fact it may never have been explicitly loaded into the session? (I must admit to be somewhat unsure of what the different cascade options do -- but they don't seem to be useful for tracking something like this.)
>
> And lastly, what do I do inside before_commit() if I want to prevent the commit from proceeding? Do I just raise an exception? Any particular type of exception, or is it my choice?
>
> Sorry for the long question, and thanks for any assistance,
>
> Randall
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
> the solution is the same as that I illustrated in a previous email, that when you map to a JOIN you must place all "equivalent" columns which you would like populated identically in groups. This is described at http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-agains...

You're right. Sorry, it just wasn't clear to me that you need to specify the equivalent columns throughout the whole inheritance hierarchy. I thought that SQLA would somehow automagically know that they are "equivalent".

Thanks a lot for your effort Mike, this was really of great help!
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
> I don't have the time most of today to get into it so I can't confirm what's going on. Any chance you could map to a straight join of all four tables instead of a join to two sub-joins ?

I tried the following join in the mapper for Subordinate:

join(Employee, Person).join(Relation, onclause=Person.id==Relation.person_from_id).join(SupervisorRelation)

I also tried using single-table inheritance in order to avoid sub-joins. The results were the same: trying to select from a non-existing sequence. This time it's the sequence for the other object, so I guess both are firing wrong; the exact error depends on which one fires first.

I could use MapperExtension.before_insert() as you suggested. However, since rows are inserted first into the parent table and then into the child table, I would need to know the next primary key value and set the id to it. But selecting from a sequence causes it to increment, so the parent and child primary keys would fail to stay in sync. I would need to somehow set the primary key after the parent object has been inserted but before the child is inserted, and I don't know how to accomplish that.
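For what it's worth, a hedged sketch of the before_insert() workaround for the simpler case of a single joined-table hierarchy (it has not been verified against the composite Subordinate mapping discussed here): fire the root table's sequence yourself and assign the value before the insert, on the assumption that a pre-assigned primary key keeps the column default from firing and is copied to the child row by the inheritance mapping. The sequence name relations_id_seq is taken from the example earlier in this thread, and connection.execute(Sequence(...)) returning the next value is the 0.5/0.6 behavior.

from sqlalchemy import Sequence
from sqlalchemy.orm.interfaces import MapperExtension, EXT_CONTINUE

class AssignRelationId(MapperExtension):
    def before_insert(self, mapper, connection, instance):
        # Fire the shared sequence once and assign it explicitly; the same
        # id is then used for both the parent and the child table rows.
        if instance.id is None:
            instance.id = connection.execute(Sequence('relations_id_seq'))
        # this is also a convenient place to set the polymorphic
        # discriminator manually, as suggested above
        return EXT_CONTINUE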
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
> with_polymorphic can be set against any subset of classes, not just '*'.

Yes, but in the first case I can't use with_polymorphic() on the query, because the query class is not the problem: I want the polymorphic load on an attribute (relation) of the queried class in order for the eagerload to work. Therefore I must set with_polymorphic in the mapper of that other class. But since mappers are global for the whole application, I can't just set it to a subset of the classes (then some other queries in the application won't work correctly).

> I don't have the time most of today to get into it so I can't confirm what's going on. Any chance you could map to a straight join of all four tables instead of a join to two sub-joins ?

I'll try using single-table inheritance first, then I'll try that. It would be great if you could look into this when you have time. Just run the last code example; the SQL echo and error message should be self-explanatory.
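For reference, the mapper-level setting being discussed; as noted elsewhere in this thread, plugging with_polymorphic='*' into the mapper for Person made the eagerload work, at the cost of applying to every Person load in the application. The tables are the ones from the example code later in this thread.

# Every load of Person now also joins to the employees table, so an eager
# load of Meeting.person_to brings in Employee.position in a single query.
mapper(Person, persons,
       polymorphic_on=persons.c.type, polymorphic_identity='P',
       with_polymorphic='*')
mapper(Employee, employees, inherits=Person, polymorphic_identity='E')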
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
Yes, I want to map to a join between two classes which are parts of a joined-table inheritance hierarchy. I don't think it's complex; it fits very naturally with the problem I am modeling.

When I said it's efficient, I meant that the generated SQL is optimal, i.e. the same as I would write if I were doing it by hand. "eagerload" and "with_polymorphic" result in SQL that also queries on fields of sibling classes (i.e. classes which inherit from the same superclass but are not in the inheritance path of the final class), which is unnecessary, and contains a subquery (which I believe is not optimal). My understanding is that performing joins on indexed fields is what RDBMSs do well. However, if the query turned out to be too slow, I can always switch to single-table inheritance; whether I use joined-table or single-table inheritance is just an implementation detail (as I understand it).

The problem is not that the sequence is not firing off, it's that it's firing for a sequence that doesn't exist. In the code above, it's trying to get the next value from the "supervisor_relations_id_seq" sequence, but that sequence doesn't exist because of inheritance. It should be trying to get it from "relations_id_seq", but for some reason it isn't. If you run the code you can see what's going on exactly in the SQL echo.

I will play around with MapperExtension and single-table inheritance and see what I get. However, I just thought that since selects and updates work so nicely in this setup, creation should also work in the same way.

On Sep 15, 4:32 pm, "Michael Bayer" wrote:
> bojanb wrote:
> > The problem is when I have an object mapped against two tables, both of which are part of an inheritance hierarchy. I managed to synchronize the foreign key with the primary key (per the documentation link you provided). However, SQLAlchemy doesn't (or I can't instruct it how to) set the polymorphic discriminator fields appropriately. I can set them manually, but then the insert fails because it looks for the sequence object on the inherited table, which doesn't exist (it exists only on the root table of the inheritance hierarchy).
> >
> > Here's example code. In brief, I have Person->Employee and Relation->SupervisorRelation as two independent inheritance hierarchies. Relation is defined between two Persons, and SupervisorRelation between two Employees. I want to hide this as an implementation detail and have a Subordinate class that the programmer down the line will work with. Subordinate contains fields from Employee and SupervisorRelation. Querying on Subordinate works (efficiently, too), and so does attribute modification. I would like to be able to create it also (after populating the required fields and commit, the underlying engine should create both a new Employee and a new SupervisorRelation).
>
> let me get this straight. you want to map to a JOIN, which itself is JOINed against two joined-table inheritance subclasses.
>
> That is
>
>     +--- join --+
>     |           |
>    join        join
>
> and each call to Query() would emit a JOIN against two sub-JOINs.
>
> is this correct ? is there a reason this need be so complex ? (and its efficient ? really ? a query like that would bring any DB to a halt on a large dataset, I would think...)
>
> If the issue is just a sequence not firing off, an immediate workaround would be to fire the sequence off yourself. you can even do this in a MapperExtension.before_insert() (good place for your polymorphic identity setting too). I don't as yet understand why the normal sequence firing wouldn't be working here, is one firing off and the other not ?
>
> > I hope this makes sense. Here's the code. When run, it throws "ProgrammingError: (ProgrammingError) relation "supervisor_relations_id_seq" does not exist"
> >
> > from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
> > from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload, join
> > from sqlalchemy.orm.mapper import validates
> >
> > DB_URI = 'postgres://postg...@localhost/postgres'  # Replace this accordingly
> > db_engine = create_engine(DB_URI, echo=False)
> > metadata = MetaData()
> >
> > class Person(object):
> >     def __init__(self, name):
> >         self.name = name
> >
> > persons = Table('persons',
> >     metadata,
> >     Column('id',
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
The problem is when I have an object mapped against two tables, both of which are part of an inheritance hierarchy. I managed to synchronize the foreign key with the primary key (per the documentation link you provided). However, SQLAlchemy doesn't (or I can't instruct it how to) set the polymorphic discriminator fields appropriately. I can set them manually, but then the insert fails because it looks for the sequence object on the inherited table, which doesn't exist (it exists only on the root table of the inheritance hierarchy).

Here's example code. In brief, I have Person->Employee and Relation->SupervisorRelation as two independent inheritance hierarchies. Relation is defined between two Persons, and SupervisorRelation between two Employees. I want to hide this as an implementation detail and have a Subordinate class that the programmer down the line will work with. Subordinate contains fields from Employee and SupervisorRelation. Querying on Subordinate works (efficiently, too), and so does attribute modification. I would like to be able to create it also (after populating the required fields and commit, the underlying engine should create both a new Employee and a new SupervisorRelation).

I hope this makes sense. Here's the code. When run, it throws "ProgrammingError: (ProgrammingError) relation "supervisor_relations_id_seq" does not exist"

from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload, join
from sqlalchemy.orm.mapper import validates

DB_URI = 'postgres://postg...@localhost/postgres'  # Replace this accordingly
db_engine = create_engine(DB_URI, echo=False)
metadata = MetaData()

class Person(object):
    def __init__(self, name):
        self.name = name

persons = Table('persons', metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(1), nullable=False),
    Column('name', String(100), nullable=False))

class Employee(Person):
    def __init__(self, name, position):
        Person.__init__(self, name)
        self.position = position

employees = Table('employees', metadata,
    Column('id', Integer, ForeignKey('persons.id'), primary_key=True),
    Column('position', String(50), nullable=False))

class Relation(object):
    def __init__(self, person_from, person_to):
        self.person_from = person_from
        self.person_to = person_to

relations = Table('relations', metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(1), nullable=False),
    Column('person_from_id', Integer, ForeignKey('persons.id'), nullable=False),
    Column('person_to_id', Integer, ForeignKey('persons.id'), nullable=False))

class SupervisorRelation(Relation):
    def __init__(self, person_from, person_to, additional_info):
        Relation.__init__(self, person_from, person_to)
        self.additional_info = additional_info

supervisor_relations = Table('supervisor_relations', metadata,
    Column('id', Integer, ForeignKey('relations.id'), primary_key=True),
    Column('additional_info', String(100), nullable=False))

class Subordinate(object):
    # This class represents the business object that we work with
    pass

mapper(Person, persons, polymorphic_on=persons.c.type, polymorphic_identity='P')
mapper(Employee, employees, inherits=Person, polymorphic_identity='E')
mapper(Relation, relations, polymorphic_on=relations.c.type, polymorphic_identity='R', properties={
    'person_from': relation(Person, primaryjoin=(relations.c.person_from_id==persons.c.id)),
    'person_to': relation(Person, primaryjoin=(relations.c.person_to_id==persons.c.id)),
})
mapper(SupervisorRelation, supervisor_relations, inherits=Relation, polymorphic_identity='S')
mapper(Subordinate,
    join(Employee, SupervisorRelation, onclause=SupervisorRelation.person_from_id==Employee.id),
    properties={
        'relation_id': supervisor_relations.c.id,  # Need to rename as there's also Employee.id
        'relation_type': relations.c.type,  # Also need to rename
        'person_from_id': [relations.c.person_from_id, persons.c.id],  # Need to declare them synonymous
        'person_from': relation(Person, primaryjoin=(relations.c.person_from_id==persons.c.id)),
        'person_to': relation(Person, primaryjoin=(relations.c.person_to_id==persons.c.id)),
    })

if __name__ == '__main__':
    metadata.create_all(db_engine)
    s = sessionmaker(bind=db_engine)()
    try:
        jack = Employee('Jack', 'manager')
        s.add(jack)
        s.commit()

        # Here we try to create a Subordinate object which should automatically create dependent objects
        db_engine.echo = True
        subordinate = Subordinate()
        subordinate.person_to = ja
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
Actually you can't use with_polymorphic() in the query, because Meeting is not an inherited object (one would get an InvalidRequestError if one tried). But plugging with_polymorphic='*' into the mapper for Person makes the eagerload work in the code above.

However, we're off on a tangent. I still don't know how to instantiate objects of a class mapped against two tables when they contain both an autogenerated primary key from the first table and a mandatory foreign key from the second...

On Sep 14, 4:31 pm, "Michael Bayer" wrote:
> bojanb wrote:
> > The root of the problem is inheritance. Let's say that I have a Person class and an Employee class that inherits from it. I also have a Meeting class that records meetings between two persons.
> >
> > A query on Meeting will always lazy load Employee's attributes, regardless of any lazy/eagerload settings. E.g. if I want to print the list of names of all persons somebody had meetings with, and also their position if they are employees (null if they're not), it will always be done lazily. This is bad when I have, let's say, 100,000 Meetings.
> >
> > I guess I can build a custom join and work from that, but if I have two levels of inheritance on one side and three levels on the other side, I will have to write a six-way join, and this, I'm sure you'll agree, sort of defeats the purpose of an object-relational mapper.
> >
> > Using classes mapped against multiple tables would elegantly solve this problem, if I could only instantiate them (see my original post).
> >
> > Here's the code that shows attributes of inherited objects are loaded lazily:
>
> oh. you want with_polymorphic() for this.
>
> http://www.sqlalchemy.org/docs/05/mappers.html#controlling-which-tabl...
>
> > from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
> > from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload
> > from sqlalchemy.orm.mapper import validates
> >
> > DB_URI = 'postgres://postg...@localhost/postgres'  # Replace this accordingly
> > db_engine = create_engine(DB_URI, echo=False)
> > metadata = MetaData()
> >
> > class Person(object):
> >     def __init__(self, name):
> >         self.name = name
> >
> > persons = Table('persons', metadata,
> >     Column('id', Integer, primary_key=True),
> >     Column('type', String(1), nullable=False),
> >     Column('name', String(100), nullable=False))
> >
> > class Employee(Person):
> >     def __init__(self, name, position):
> >         Person.__init__(self, name)
> >         self.position = position
> >
> > employees = Table('employees', metadata,
> >     Column('id', Integer, ForeignKey('persons.id'), primary_key=True),
> >     Column('position', String(50), nullable=False))
> >
> > class Meeting(object):
> >     def __init__(self, date, person_from, person_to):
> >         self.date = date
> >         self.person_from = person_from
> >         self.person_to = person_to
> >
> > meetings = Table('meetings', metadata,
> >     Column('id', Integer, primary_key=True),
> >     Column('date', String(8), nullable=False),
> >     Column('person_from_id', Integer, ForeignKey('persons.id'), nullable=False),
> >     Column('person_to_id', Integer, ForeignKey('persons.id'), nullable=False))
> >
> > mapper(Person, persons, polymorphic_on=persons.c.type, polymorphic_identity='P')
> > mapper(Employee, employees, inherits=Person, polymorphic_identity='E')
> > mapper(Meeting, meetings, properties={
> >     'person_from': relation(Person, primaryjoin=(meetings.c.person_from_id==persons.c.id)),
> >     'person_to': relation(Person, primaryjoin=(meetings.c.person_to_id==persons.c.id)),
> > })
> >
> > if __name__ == '__main__':
> >     metadata.create_all(db_engine)
> >     s = sessionmaker(bind=db_engine)()
> >     try:
> >         john = Person('John')
> >         peter = Employee('Peter', 'clerk')
> >         jack = Employee('Jac
[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?
The root of the problem is inheritance. Let's say that I have a Person class and an Employee class that inherits from it. I also have a Meeting class that records meetings between two persons.

A query on Meeting will always lazy load Employee's attributes, regardless of any lazy/eagerload settings. E.g. if I want to print the list of names of all persons somebody had meetings with, and also their position if they are employees (null if they're not), it will always be done lazily. This is bad when I have, let's say, 100,000 Meetings.

I guess I can build a custom join and work from that, but if I have two levels of inheritance on one side and three levels on the other side, I will have to write a six-way join, and this, I'm sure you'll agree, sort of defeats the purpose of an object-relational mapper.

Using classes mapped against multiple tables would elegantly solve this problem, if I could only instantiate them (see my original post).

Here's the code that shows attributes of inherited objects are loaded lazily:

from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload
from sqlalchemy.orm.mapper import validates

DB_URI = 'postgres://postg...@localhost/postgres'  # Replace this accordingly
db_engine = create_engine(DB_URI, echo=False)
metadata = MetaData()

class Person(object):
    def __init__(self, name):
        self.name = name

persons = Table('persons', metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(1), nullable=False),
    Column('name', String(100), nullable=False))

class Employee(Person):
    def __init__(self, name, position):
        Person.__init__(self, name)
        self.position = position

employees = Table('employees', metadata,
    Column('id', Integer, ForeignKey('persons.id'), primary_key=True),
    Column('position', String(50), nullable=False))

class Meeting(object):
    def __init__(self, date, person_from, person_to):
        self.date = date
        self.person_from = person_from
        self.person_to = person_to

meetings = Table('meetings', metadata,
    Column('id', Integer, primary_key=True),
    Column('date', String(8), nullable=False),
    Column('person_from_id', Integer, ForeignKey('persons.id'), nullable=False),
    Column('person_to_id', Integer, ForeignKey('persons.id'), nullable=False))

mapper(Person, persons, polymorphic_on=persons.c.type, polymorphic_identity='P')
mapper(Employee, employees, inherits=Person, polymorphic_identity='E')
mapper(Meeting, meetings, properties={
    'person_from': relation(Person, primaryjoin=(meetings.c.person_from_id==persons.c.id)),
    'person_to': relation(Person, primaryjoin=(meetings.c.person_to_id==persons.c.id)),
})

if __name__ == '__main__':
    metadata.create_all(db_engine)
    s = sessionmaker(bind=db_engine)()
    try:
        john = Person('John')
        peter = Employee('Peter', 'clerk')
        jack = Employee('Jack', 'manager')
        m1 = Meeting('20090914', peter, john)
        m2 = Meeting('20090915', peter, jack)
        s.add_all([john, peter, jack, m1, m2])
        s.commit()

        db_engine.echo = True
        # We now want to print the names and positions of everyone Peter has ever met with
        peters_meetings = s.query(Meeting).options(eagerload('person_to')).filter_by(person_from=peter).all()
        for meeting in peters_meetings:
            if meeting.person_to.type == 'P':
                print meeting.date, meeting.person_to.name, None
            else:
                # Each print statement here will emit an SQL SELECT on the employees table
                print meeting.date, meeting.person_to.name, meeting.person_to.position
    finally:
        db_engine.echo = False
        s.close()
        metadata.drop_all(db_engine)

On Sep 11, 7:52 pm, "Michael Bayer" wrote:
> have you tried using query + join() + contains_eager() ? any query you like can be used to build the object graph of your choice along relations().
>
> bojanb wrote:
> > Here's something I've been struggling with recently. I'll include the description of steps that got me here, as I believe the context will make the question clearer.
> >
> > It all started because I needed to show data (e.g. in a list form) from two related tables (classes). However, SQLAlchemy would emit one SQL query for getting the objects of the first class, then one query each for each access to attributes of the other class. It obviously loads the attributes lazily, which is fine most of the time but grossly inefficient in this case (as there can be thousands of records in the first table).
> >
> > "Aha, I'll use eagerload!" I thought. Alas, it doesn't seem to work for inherited classes. A mess
[sqlalchemy] How to instantiate objects of a class mapped against multiple tables?
Here's something I've been struggling with recently. I'll include the description of steps that got me here, as I believe the context will make the question clearer.

It all started because I needed to show data (e.g. in a list form) from two related tables (classes). However, SQLAlchemy would emit one SQL query for getting the objects of the first class, then one query each for each access to attributes of the other class. It obviously loads the attributes lazily, which is fine most of the time but grossly inefficient in this case (as there can be thousands of records in the first table).

"Aha, I'll use eagerload!" I thought. Alas, it doesn't seem to work for inherited classes. A message in this group suggests that it doesn't work for self-referential inherited classes, but in this case it didn't work for plain inherited classes that don't contain references to self. I'll write a test case that shows this later.

OK, I then decided to create a new class mapped against the two tables, using join() in a mapper. This worked great regarding the emitted SQL: session.query on the new object generates the correct SQL even with the (deep) inheritance tree that we have. Modifying the attributes of this object also writes them to the correct respective tables on commit. Great! The new class even conceptually fits nicely with the rest of the application (e.g. I realized it's more of a business object while the two underlying classes/tables are more of an implementation detail; I'm not sure I even need the other two classes, just the tables may be enough). Fantastic!

However, I can't figure out how to create new instances of this (composite, as I call it) class. Since it contains fields both for the autogenerated primary key from the first class and the foreign key from the second class, I cannot set the foreign key because I don't know the primary key before I commit, and the commit fails because the foreign key is still null.

Am I just missing something, or am I attempting a no-no? I would think that since I've defined the attributes to be identical in the "onclause", SQLAlchemy would know that the two are dependent and would not complain about the null value. Of course, I can always create the two other objects, but being able to do it this way fits much more nicely with the rest of the application.
[sqlalchemy] Re: How to define 'count' as an SQL mapped attribute for many-to-many relationships?
There was an error in my posted code, I meant:

    'book_count': column_property(select([func.count(books.c.id)],
        book_authors.c.author_id==authors.c.id).label('books_count'))

but that's irrelevant. Using and_(), the *correct* definition is:

    'book_count': column_property(select([func.count(books.c.id)],
        and_(book_authors.c.author_id==authors.c.id,
             book_authors.c.book_id==books.c.id)).label('books_count'))

Thanks for the help. By the way, I suggest adding this example to the documentation; I know it would have saved me a lot of head-banging. I suggest the following text tacked at the end of "SQL Expressions as Mapped Attributes" in "Mapper Configuration":

For many-to-many relationships, use and_() to join the fields of the association table to both tables in a relation:

    mapper(Author, authors, properties={
        'books': relation(Book, secondary=book_authors, backref='authors'),
        'book_count': column_property(select([func.count(books.c.id)],
            and_(book_authors.c.author_id==authors.c.id,
                 book_authors.c.book_id==books.c.id)).label('books_count'))
    })

On Jul 22, 4:10 pm, "Michael Bayer" wrote:
> bojanb wrote:
> > mapper(Book, books)
> > mapper(Author, authors, properties={
> >     'books': relation(Book, secondary=book_authors, backref='authors'),
> >     'book_count': column_property(select([func.count(books.c.id)],
> >         book_authors.c.author_id==books.c.id).label('books_count'))
> > })
>
> the subquery within the column property is not joined to the parent table
> "authors". It needs to join book_authors back to the authors table using
> AND. the subquery will automatically correlate outwards to the "authors"
> query so "authors" would only render in the outermost FROM (which is what
> you want).
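One small usage note to add (my addition, not from the original exchange): because book_count is a column_property, it is also available at the class level for filtering and ordering, along the lines of:

    #Authors with more than one book, most prolific first
    prolific = s.query(Author).\
        filter(Author.book_count > 1).\
        order_by(Author.book_count.desc()).\
        all()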
[sqlalchemy] How to define 'count' as an SQL mapped attribute for many-to-many relationships?
I'm not sure if this is a bug or I am just setting up the attribute wrong. Anyways, I'm having a problem defining a count-type SQL mapped attribute in a situation with many-to-many relationships. The example here uses the classic book-author many-to-many relation: an author can have many books, and a book can be by more than one author. Again, the code is a self-contained test case.

from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper, relation, backref, column_property, sessionmaker
from sqlalchemy.sql import select, func

DB_URI='postgres://postg...@localhost/postgres' #Replace this accordingly

db_engine=create_engine(DB_URI, echo=True)
metadata = MetaData()

books=Table('books', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', String(100), nullable=False)
)

class Book(object):
    def __init__(self, title):
        self.title=title

authors=Table('authors', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100))
)

class Author(object):
    def __init__(self, name):
        self.name=name

book_authors=Table('book_authors', metadata,
    Column('book_id', Integer, ForeignKey('books.id'), primary_key=True),
    Column('author_id', Integer, ForeignKey('authors.id'), primary_key=True)
)

mapper(Book, books)
mapper(Author, authors, properties={
    'books': relation(Book, secondary=book_authors, backref='authors'),
    'book_count': column_property(select([func.count(books.c.id)],
        book_authors.c.author_id==books.c.id).label('books_count'))
})

if __name__ == '__main__':
    metadata.create_all(db_engine)
    s=sessionmaker(bind=db_engine)()

    b1=Book('I, Robot')
    b2=Book('Foundation')
    b3=Book('Rendezvous with Rama')
    a1=Author('Isaac Asimov')
    a2=Author('Arthur C. Clarke')
    a1.books=[b1, b2]
    a2.books=[b3]
    s.add_all([b1, b2, b3, a1, a2])
    s.commit()

    try:
        #Isaac Asimov has two books
        assert len(a1.books)==2
        #But reading book_count returns the total number of books in the books table (in this case 3)
        assert a1.book_count==2 #Fails
    finally:
        s.close()
        metadata.drop_all(db_engine)
[sqlalchemy] A bug in SQL expression mapped attributes for inherited objects
Hello,

It took me a couple of days to narrow this down. It appears that when an object inherits from another object via joined table inheritance (I haven't tested the other two inheritance modes), mapped attributes defined as SQL expressions don't load correctly.

In the example below I have a Party object, which can be a Person or a Company. The Person object has a mapped attribute "full_name", which is simply the first and last name concatenated with a space (similar to the example in the SQLAlchemy documentation). Querying on Party and then accessing this property returns None. Strangely, if we queried on Person during the same session, a subsequent query on Party would work.

I've attached a fully self-contained test case below; just replace DB_URI with the correct value for you and the bug can be reproduced by executing the code.

My setup:
Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win32
SQLAlchemy 0.5.5

from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper, relation, backref, column_property, sessionmaker
from sqlalchemy.sql import select, func

DB_URI='postgres://postg...@localhost/postgres' #Replace this accordingly

db_engine=create_engine(DB_URI, echo=True)
metadata = MetaData()

parties=Table('parties', metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(1), nullable=False) #This field needed for joined table inheritance
)

class Party(object):
    pass

persons=Table('persons', metadata,
    Column('id', Integer, ForeignKey('parties.id'), primary_key=True), #Foreign key needed for joined table inheritance
    Column('first_name', String(50)),
    Column('last_name', String(50))
)

class Person(Party):
    def __init__(self, first_name, last_name):
        self.first_name=first_name
        self.last_name=last_name

companies=Table('companies', metadata,
    Column('id', Integer, ForeignKey('parties.id'), primary_key=True),
    Column('name', String(50)),
    Column('tax_id', String(10))
)

class Company(Party):
    def __init__(self, name, tax_id):
        self.name=name
        self.tax_id=tax_id

mapper(Party, parties, polymorphic_on=parties.c.type, polymorphic_identity='T')
mapper(Person, persons, inherits=Party, polymorphic_identity='P', properties={
    'full_name': column_property((persons.c.first_name + ' ' + persons.c.last_name).label('full_name'))
})
mapper(Company, companies, inherits=Party, polymorphic_identity='C')

if __name__ == '__main__':
    metadata.create_all(db_engine)
    s=sessionmaker(bind=db_engine)()
    s.add_all([Person('John', 'Smith'), Company('ACME Inc.', '1234')])
    s.commit()

    try:
        #Weird - if the following two lines were run before the subsequent query, the final assert wouldn't fail!!
        #all_persons=s.query(Person).all()
        #assert all_persons[0].full_name=='John Smith'

        all_parties=s.query(Party).all()
        #Although we queried on Party, SQLAlchemy correctly recognizes that the second object is a Company, as per the specification
        assert all_parties[1].tax_id=='1234'
        #However, for SQL expression mapped attributes it returns None instead of the stored value.
        #SQLAlchemy documentation clearly states that this should work - see section "Controlling Which Tables are Queried".
        #BTW, this failure doesn't happen for objects that are not part of an inheritance hierarchy.
        assert all_parties[0].full_name=='John Smith'
    finally:
        s.close()
        metadata.drop_all(db_engine)
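A possible workaround to try in the meantime - strictly an untested guess, not something confirmed in this thread - is to pull the Person mapper into the base-class query with Query.with_polymorphic(), so the persons columns (and ideally the full_name expression) are part of the initial SELECT rather than loaded afterwards:

    #Untested workaround sketch against the mapping above
    all_parties = s.query(Party).with_polymorphic([Person]).all()
    assert all_parties[0].full_name == 'John Smith'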