[sqlalchemy] [0.4.8] broken convert_unicode behavior?
Hey,

RDBMS: PostgreSQL
Database encoding: utf8
Column type: character varying
Engine:

    sqlalchemy.create_engine(self.dsn, echo=False, encoding='utf-8',
                             convert_unicode=True, strategy='threadlocal', **kw)

For example, the username is Vladimïrovich.

With SQLAlchemy 0.3.x (correct):

    user = session.query(User).get(1)
    user.name
    Vladim\u00efrovich

With SQLAlchemy 0.4.8 (incorrect):

    user = session.query(User).get(1)
    user.name
    Vladim\u00c3\u00afrovic

Is this a mistake on my side, or a SQLAlchemy problem? How can I check whether SQLAlchemy really tries to convert strings to unicode?

Thanks, Tyr

You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
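[Editorial note] The incorrect value looks like classic double-decoding: the UTF-8 bytes for ï (0xC3 0xAF) being re-interpreted through a one-byte codec such as Latin-1 is one plausible mechanism that produces exactly \u00c3\u00af. A minimal pure-Python sketch of the symptom (no SQLAlchemy involved; names are illustrative):

```python
name = "Vladim\u00efrovich"        # Vladimïrovich, as stored in the database
utf8_bytes = name.encode("utf-8")  # the bytes PostgreSQL actually sends

decoded_once = utf8_bytes.decode("utf-8")   # what convert_unicode should yield
mojibake = utf8_bytes.decode("latin-1")     # wrong codec: the 0.4.8 symptom

assert decoded_once == "Vladim\u00efrovich"
assert mojibake == "Vladim\u00c3\u00afrovich"  # matches the bad output above
```

If the mojibake form appears, the bytes were correct but were decoded with the wrong codec somewhere between the driver and the mapper.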
[sqlalchemy] foreign key problem when using reflection and schemas
Hi there,

I have a problem with foreign keys that seems to occur when I combine reflection and explicit schemas, in the context of MySQL. I've confirmed this problem with both rc2 and the trunk. It's best demonstrated with some failing code. Imagine the following MySQL database 'somedb':

    CREATE TABLE somedb.a (
        id int PRIMARY KEY auto_increment NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    CREATE TABLE somedb.b (
        id int PRIMARY KEY auto_increment NOT NULL,
        a_id int NOT NULL,
        FOREIGN KEY (a_id) REFERENCES somedb.a(id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And the following code:

    from sqlalchemy import *
    from sqlalchemy.orm import mapper, relation, sessionmaker

    engine = create_engine('mysql:///somedb')
    meta = MetaData()
    meta.bind = engine

    a_table = Table('a', meta, schema='somedb', autoload=True)
    b_table = Table('b', meta, schema='somedb', autoload=True)

    class A(object): pass
    class B(object): pass

    mapper(A, a_table, properties={'bs': relation(B)})
    mapper(B, b_table)

    Session = sessionmaker(bind=engine)
    session = Session()
    print session.query(A).all()

When executing this code, the last line fails with the following error:

    Traceback (most recent call last):
      File "bin/devpython", line 138, in ?
        execfile(sys.argv[0])
      File "experiment.py", line 33, in ?
        print session.query(A).all()
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/session.py", line 914, in query
        return self._query_cls(entities, self, **kwargs)
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 95, in __init__
        self.__setup_aliasizers(self._entities)
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 109, in __setup_aliasizers
        mapper, selectable, is_aliased_class = _entity_info(entity)
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 454, in _entity_info
        mapper = class_mapper(entity, compile)
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 531, in class_mapper
        mapper = mapper.compile()
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 371, in compile
        mapper.__initialize_properties()
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 393, in __initialize_properties
        prop.init(key, self)
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/interfaces.py", line 384, in init
        self.do_init()
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 531, in do_init
        self._determine_joins()
      File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 604, in _determine_joins
        raise sa_exc.ArgumentError("Could not determine join condition between
    sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation A.bs. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

This code *only* fails if I designate an explicit 'schema' in the table statements. If I leave these out, things work as expected.
Since I'm interested in working with reflected tables that reside in multiple schemas, this is a problem. Digging around indicates that _search_for_join, defined in _determine_joins, does not actually find the join clause. Going deeper traces the failure down to the Join class in sqlalchemy.sql.expression, which fails in self._match_primaries in its __init__ method. This in turn brings us to sqlalchemy.sql.util.join_condition, where fk.get_referent() returns None if schemas are explicitly specified, and works fine if not. fk.get_referent() uses corresponding_column, which in turn tries to use contains_column(), which returns False in the schema case but True if 'schema' is not explicitly specified. Why, I don't know. The repr of the column passed into contains_column looks the same as the repr of the column in the table, but apparently it's not exactly the same instance. Something somewhere is causing the column to be different. Is this a bug? If so, how would we go about solving it?

Regards, Martijn
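[Editorial note] The "same repr, different instance" observation is the likely crux: containment checks of this kind compare column objects, not their printed form, so two reflection passes that build separate objects for the same schema-qualified column will print identically yet fail a membership test. A tiny pure-Python analogy (this Column class is hypothetical, not SQLAlchemy's):

```python
class Column(object):
    """Toy stand-in for a reflected column object."""
    def __init__(self, table, name):
        self.table, self.name = table, name

    def __repr__(self):
        return "Column(%r, %r)" % (self.table, self.name)

# two reflection passes producing separate objects for the same DB column
col_from_table = Column("somedb.a", "id")
col_from_fk = Column("somedb.a", "id")

assert repr(col_from_table) == repr(col_from_fk)  # reprs match...
assert col_from_table is not col_from_fk          # ...but identity does not
# An identity-based contains_column()-style check would therefore return
# False even though both objects describe the same column.
```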
[sqlalchemy] Re: [0.4.8] SOLVED broken convert_unicode behavior?
Sorry, it's my mistake. The code calls a stored procedure and the result is not converted to unicode. Simple mapping still works.

----- Original message -----
From: [EMAIL PROTECTED]
To: sqlalchemy@googlegroups.com
Date: 27.10.2008 11:10
Subject: [sqlalchemy] [0.4.8] broken convert_unicode behavior?

Hey,

RDBMS: PostgreSQL
Database encoding: utf8
Column type: character varying
Engine:

    sqlalchemy.create_engine(self.dsn, echo=False, encoding='utf-8',
                             convert_unicode=True, strategy='threadlocal', **kw)

For example, the username is Vladimïrovich.

With SQLAlchemy 0.3.x (correct):

    user = session.query(User).get(1)
    user.name
    Vladim\u00efrovich

With SQLAlchemy 0.4.8 (incorrect):

    user = session.query(User).get(1)
    user.name
    Vladim\u00c3\u00afrovic

Is this a mistake on my side, or a SQLAlchemy problem? How can I check whether SQLAlchemy really tries to convert strings to unicode?

Thanks, Tyr
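[Editorial note] Since convert_unicode applies to columns whose types SQLAlchemy knows about, rows coming back from a raw stored-procedure call can arrive as undecoded byte strings. A hedged workaround sketch (the helper name is made up, not a SQLAlchemy API): decode such rows by hand before use.

```python
def decode_row(row, encoding="utf-8"):
    # Hypothetical helper: decode any byte strings in a raw result row,
    # mimicking what convert_unicode does for mapped columns.
    return tuple(
        value.decode(encoding) if isinstance(value, bytes) else value
        for value in row
    )

# what a raw stored-procedure call might hand back
raw = (1, "Vladim\u00efrovich".encode("utf-8"))
assert decode_row(raw) == (1, "Vladim\u00efrovich")
```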
[sqlalchemy] Re: foreign_keys question
Thanks. I tracked the problem down to the SQL script I was using to create the new tables: the new script accidentally dropped the previous foreign key. It is fixed now and works correctly.
[sqlalchemy] Composite columns with declarative base
Hello all,

New to SQLAlchemy and ORM, and loving it. I am trying to use the declarative base extension with a composite column. I have two very simple tables, user and address. My code at http://python.pastebin.com/m6e032164 works without any problem. I am trying to do the same thing using declarative base, http://python.pastebin.com/m1a05e5c0, and it throws an error. Any ideas?
[sqlalchemy] in_ and related objects
I'm having trouble using in_ to select a bunch of related objects. Here is a quick overview of my model (shortened for simplicity):

    # Autoloaded table defs go here and aren't really important

    class Person(object): pass
    class EMailAddress(object): pass

    orm.mapper(Person, person_table, properties={
        'name': orm.column_property(person_table.c.first_name + " " +
                                    person_table.c.last_name)
    })
    orm.mapper(EMailAddress, email_table, properties={
        'person': orm.relation(Person, lazy=False,
                               backref=backref('email_addresses', lazy=False))
    })

I want to be able to do something like this:

    people = [person1, person2, person3]
    addresses = Session.query(EMailAddress).filter(Person.in_(people))

When I try it, I get an AttributeError: type object 'Person' has no attribute 'in_'. I'm running 0.5.0rc2. Is this possible?
[sqlalchemy] Re: in_ and related objects
On Oct 27, 2008, at 10:43 AM, Adam wrote:

I'm having trouble using in_ to select a bunch of related objects. Here is a quick overview of my model (shortened for simplicity):

    # Autoloaded table defs go here and aren't really important

    class Person(object): pass
    class EMailAddress(object): pass

    orm.mapper(Person, person_table, properties={
        'name': orm.column_property(person_table.c.first_name + " " +
                                    person_table.c.last_name)
    })
    orm.mapper(EMailAddress, email_table, properties={
        'person': orm.relation(Person, lazy=False,
                               backref=backref('email_addresses', lazy=False))
    })

I want to be able to do something like this:

    people = [person1, person2, person3]
    addresses = Session.query(EMailAddress).filter(Person.in_(people))

When I try it, I get an AttributeError: type object 'Person' has no attribute 'in_'. I'm running 0.5.0rc2. Is this possible?

The in_() operator is not currently implemented for many-to-one relations; it's a TODO. When it is implemented, it would be EmailAddress.person.in_(people). So currently there are two ways to do it:

    Session.query(EmailAddress).filter(or_(*[EmailAddress.person == p for p in people]))

    Session.query(EmailAddress).filter(EmailAddress.person_id.in_([p.id for p in people]))

Note that or_() takes *args, while in_() takes a list. Believe it or not, there's a rationale for this. Also, I'd recommend a plain Python descriptor for Person.name, since it would reduce SQL overhead a bit:

    @property
    def name(self):
        return self.first_name + " " + self.last_name
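[Editorial note] The "or_() takes *args, in_() takes a list" distinction comes down to argument unpacking: the generated list of clauses is splatted into or_() with *. A toy stand-in (plain string building, not SQLAlchemy's actual or_) just to illustrate the unpacking pattern:

```python
def or_(*clauses):
    # Toy stand-in for sqlalchemy.or_: variadic, joins clauses with OR.
    return "(" + " OR ".join(clauses) + ")"

people_ids = [7, 8, 9]  # hypothetical primary keys of person1..person3
clause = or_(*["email.person_id = %d" % pid for pid in people_ids])
assert clause == "(email.person_id = 7 OR email.person_id = 8 OR email.person_id = 9)"
```

The * in the call site is what turns the list comprehension's single list into separate positional arguments, which is exactly what the first workaround above does.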
[sqlalchemy] Re: Composite columns with declarative base
There's a bug in that the error message is misleading, but in fact a composite property owns the columns within it, and they cannot be mapped separately. So to make that work you'd need to say:

    class User(Base):
        __tablename__ = 'user'

        house_address_id = Column('house_address', Integer, ForeignKey('address.id'))
        office_address_id = Column('office_address', Integer, ForeignKey('address.id'))
        house_address = relation(Address, primaryjoin=house_address_id == Address.id)
        office_address = relation(Address, primaryjoin=office_address_id == Address.id)
        comp = composite(Comp,
                         Column('id', Integer, primary_key=True, autoincrement=True),
                         Column('name', CHAR))

But the way you're using Comp isn't going to work in any case; you're actually looking for comparable_property() here:

    class MyComparator(sqlalchemy.orm.interfaces.PropComparator):
        def __eq__(self, other):
            return self.comp == other.comp

    class User(Base):
        __tablename__ = 'user'

        id = Column('id', Integer, primary_key=True, autoincrement=True)
        name = Column('name', CHAR)
        house_address_id = Column('house_address', Integer, ForeignKey('address.id'))
        office_address_id = Column('office_address', Integer, ForeignKey('address.id'))
        house_address = relation(Address, primaryjoin=house_address_id == Address.id)
        office_address = relation(Address, primaryjoin=office_address_id == Address.id)

        @property
        def comp(self):
            return self.id + self.name

        comp = comparable_property(MyComparator)

On Oct 27, 2008, at 9:22 AM, riteshn wrote:

Hello all, new to SQLAlchemy and ORM and loving it. I am trying to use the declarative base extension with a composite column. I have two very simple tables, user and address. My code at http://python.pastebin.com/m6e032164 works without any problem. I am trying to do the same thing using declarative base, http://python.pastebin.com/m1a05e5c0, and it throws an error. Any ideas?
[sqlalchemy] Querying problem regarding joined table inheritance
Hi,

I have a problem regarding inheritance. I use joined table inheritance. Assume the classes parent, child1 and child2. The parent_table has a field 'type'. I have to use

    session.query(child1).from_statement(deliver_me_parents_with_special_conditions).all()

and the result contains both child1 and child2 objects. The documentation says something about bypassing all other stuff, but my description was just a model for my problem; I cannot do anything like type=child1 in the statement, because there is a deeper inheritance structure.

Summary: I have a result set of 'parents', but I want to get only the child1 objects. The result of from_statement contains the type column, so SQLAlchemy should be able to resolve the child1 objects?

Best regards, Matthias.
[sqlalchemy] Re: in_ and related objects
Thank you very much. I had implemented name as a plain Python property, but needed to be able to perform a LIKE query over a person's whole name (autocomplete text box), so I defined it there for ease of querying. I could have generated the column only when I needed to query over it, but this seemed cleaner to me.

On Oct 27, 11:29 am, Michael Bayer [EMAIL PROTECTED] wrote:

On Oct 27, 2008, at 10:43 AM, Adam wrote: [...]

The in_() operator is not currently implemented for many-to-one relations; it's a TODO. When it is implemented, it would be EmailAddress.person.in_(people). So currently there are two ways to do it:

    Session.query(EmailAddress).filter(or_(*[EmailAddress.person == p for p in people]))

    Session.query(EmailAddress).filter(EmailAddress.person_id.in_([p.id for p in people]))

Note that or_() takes *args, while in_() takes a list. Believe it or not, there's a rationale for this. Also, I'd recommend a plain Python descriptor for Person.name, since it would reduce SQL overhead a bit:

    @property
    def name(self):
        return self.first_name + " " + self.last_name
[sqlalchemy] Re: Querying problem regarding joined table inheritance
On Oct 27, 2008, at 12:19 PM, Matthias wrote:

Hi, I have a problem regarding inheritance. I use joined table inheritance. Assume the classes parent, child1 and child2. The parent_table has a field 'type'. I have to use session.query(child1).from_statement(deliver_me_parents_with_special_conditions).all(), and the result contains both child1 and child2 objects. The documentation says something about bypassing all other stuff, but my description was just a model for my problem; I cannot do anything like type=child1 in the statement, because there is a deeper inheritance structure. Summary: I have a result set of 'parents', but I want to get only the child1 objects. The result of from_statement contains the type column, so SQLAlchemy should be able to resolve the child1 objects?

It should. Is child2 a subclass of child1?
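[Editorial note] If the polymorphic loading does resolve the types correctly, the "only child1" intent can be expressed as a plain isinstance() filter over the loaded objects, as a stopgap while the query-side question is sorted out. This assumes child2 is not a subclass of child1 (which is exactly what the reply asks); the classes below are illustrative stand-ins:

```python
class Parent(object): pass
class Child1(Parent): pass
class Child2(Parent): pass   # sibling of Child1, not a subclass

# stand-in for what query(...).from_statement(...).all() returned
results = [Child1(), Child2(), Child1()]

child1_only = [obj for obj in results if isinstance(obj, Child1)]
assert len(child1_only) == 2
assert all(type(obj) is Child1 for obj in child1_only)
```

Note that if child2 *did* subclass child1, isinstance() would keep child2 objects too, which would reproduce the reported behavior.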
[sqlalchemy] COMMENT ON clause for schema.Column/schema.Table?
Hi,

I looked at the API doc but found no obvious way to specify a COMMENT clause for a schema object. Is there any support for this in SA, or anything planned? Basically you can add comments to schema objects in e.g. Oracle via a COMMENT command (for Oracle: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_4009.htm#i2119719; SQL Server supports something similar, and PostgreSQL too: http://www.postgresql.org/docs/8.3/interactive/sql-comment.html). It would be nice to be able to reflect a docstring into the database schema that way.

Michael

--
Michael Schlenker
Software Engineer, CONTACT Software GmbH
http://www.contact.de/
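[Editorial note] Pending built-in support, the DDL can be emitted by hand and run through the engine. A sketch of a helper that renders the Oracle/PostgreSQL syntax; the function is hypothetical (not a SQLAlchemy API), and real code should let the dialect handle identifier quoting rather than this minimal single-quote doubling:

```python
def comment_on_column(table, column, comment):
    # Hypothetical helper: render COMMENT ON COLUMN DDL as understood by
    # Oracle and PostgreSQL. Doubles single quotes as minimal escaping.
    escaped = comment.replace("'", "''")
    return "COMMENT ON COLUMN %s.%s IS '%s'" % (table, column, escaped)

ddl = comment_on_column("person", "dob", "Date of birth")
assert ddl == "COMMENT ON COLUMN person.dob IS 'Date of birth'"
# the resulting string could then be executed via engine.execute(ddl)
```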
[sqlalchemy] Re: child counting + delete
On Oct 27, 2008, at 4:23 PM, GustaV wrote:

Ok, it looks good... I assume it also means I can't rely on ON DELETE CASCADE anymore, and must then use passive_deletes=False, at least on those relations.

I don't see why that's the case here. It depends on the specific scenario, but in the example I gave, the parent row is deleted along with all the child rows, so the UPDATE statement is moot. You could modify the OnFlushExt to call a different hook for deleted objects, though.

From a general point of view, using MySQL InnoDB tables, is ON DELETE CASCADE more efficient than SA stuff?

Much more efficient, and you should keep using it.
[sqlalchemy] Re: column name capitalization in mysql
Thanks Michael, I posted a new ticket at http://www.sqlalchemy.org/trac/ticket/1206

Best Regards, -Tom H

On Oct 27, 4:13 pm, Empty [EMAIL PROTECTED] wrote:

Tom,

On Mon, Oct 27, 2008 at 4:03 PM, Tom H [EMAIL PROTECTED] wrote:

I'm experiencing the following problem with SA 0.4.7. The table has the following columns:

    id  - int primary key
    DOB - varchar(25)
    ... extra columns not needed for report

It's a table from a legacy app that I'm accessing from SA to generate reports, etc. I'm having the table definition autoload (reflected from the database) like this:

    myTable = Table('sample_table', metadata, autoload=True,
                    include_columns=['id', 'DOB'])

When I later retrieve data, the DOB column is not reflected. If I change 'DOB' to 'dob' in the include_columns list, the column is reflected as 'DOB'... strange that the reflected name is capitalized, but when I specify all caps for include_columns the field is not reflected. Any recommendations or suggestions for dealing with this issue?

The problem is related to this bit of code:

    if only and name.lower() not in only:
        self.logger.info("Omitting reflected column %s.%s" % (table.name, name))
        return

If you would please submit a ticket we can get it fixed.

Thanks,

Michael
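[Editorial note] The quoted check lowercases the reflected column name but compares it against the caller's include_columns list verbatim, which explains the observed behavior: passing 'DOB' never matches, while 'dob' does. A pure-Python reduction of the bug and one symmetric fix (function names are illustrative):

```python
def included_buggy(name, only):
    # mirrors the quoted check: only the reflected name is lowercased
    return name.lower() in only

def included_fixed(name, only):
    # compare both sides case-insensitively
    return name.lower() in [c.lower() for c in only]

assert not included_buggy("DOB", ["id", "DOB"])  # the surprise Tom hit
assert included_buggy("DOB", ["id", "dob"])      # why lowercasing "works"
assert included_fixed("DOB", ["id", "DOB"])      # fix accepts either spelling
assert included_fixed("DOB", ["id", "dob"])
```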
[sqlalchemy] Re: child counting + delete
Ok! My mistake! I've got it working now. Thanks again.

On 27 oct, 22:08, Michael Bayer [EMAIL PROTECTED] wrote:

Just as a note, if you're using ON DELETE CASCADE with passive_deletes=True, SQLA will still explicitly delete whatever objects might already be loaded into collections, so that the Session stays in sync with what's currently in it. The CASCADE then occurs after this, when SQLA deletes the parent row.

On Oct 27, 2008, at 4:57 PM, GustaV wrote: [...]
[sqlalchemy] Re: child counting + delete
Just as a note, if you're using ON DELETE CASCADE with passive_deletes=True, SQLA will still explicitly delete whatever objects might already be loaded into collections, so that the Session stays in sync with what's currently in it. The CASCADE then occurs after this, when SQLA deletes the parent row.

On Oct 27, 2008, at 4:57 PM, GustaV wrote:

The reason I ask is that it is not really safe to use both DB cascades and SA delete management. The time comes quite fast when you don't know whether an object is deleted by SA or by the DB, so you don't know whether you got into after_flush or not in any case. So: is there a true speed penalty to doing it completely on the SA side?

On 27 oct, 21:23, GustaV [EMAIL PROTECTED] wrote:

Ok, it looks good... I assume it also means I can't rely on ON DELETE CASCADE anymore, and must then use passive_deletes=False, at least on those relations. From a general point of view, using MySQL InnoDB tables, is ON DELETE CASCADE more efficient than SA stuff? Thanks a lot for your help.

On 23 oct, 04:00, Michael Bayer [EMAIL PROTECTED] wrote:

I do this kind of thing... there are lots of variants on how to do this, but the idea of __after_flush__ is that everything, including collections, has been committed (but the new, dirty and deleted lists have not been reset yet):

    class OnFlushExt(orm.SessionExtension):
        def after_flush(self, session, flush_context):
            for obj in list(session.new) + list(session.dirty) + list(session.deleted):
                if hasattr(obj, '__after_flush__'):
                    obj.__after_flush__(session.connection())

    class MyObject(object):
        def __after_flush__(self, connection):
            connection.execute(
                mytable.update().where(mytable.c.id == self.id).
                values(object_count=
                    select([func.count(child_table.c.id)]).
                    where(child_table.c.parent_id == mytable.c.id)
                )
            )

You could also batch all the updates into a single UPDATE statement within after_flush(); that would be a lot more efficient (i.e. where mytable.c.id.in_([all the ids of the changed parent objects])).

On Oct 22, 2008, at 5:41 PM, GustaV wrote:

Ok... I'm not sure I understand the way you do it... Does it mean I should check the adds and deletes of regions in the transaction in after_flush() and issue SQL directly (not using the ORM) to update the count in Country?

On 22 oct, 20:46, Michael Bayer [EMAIL PROTECTED] wrote:

On Oct 22, 2008, at 2:05 PM, GustaV wrote:

Hi all! In a 1-N relation between a country and its regions, I'm using an attribute extension to update the current count of regions in the country. It works very well when I append or remove regions from a country. But if I delete one of the regions directly (session.delete(region)), the country doesn't know it has lost one... I tried to use a MapperExtension.before_delete to manually remove the said region from the country, but it is not marked as dirty and then not updated... Any way to do it properly?

For a one-to-many relation, it's often easy enough to just have a cascade rule from country to region such that the region is deleted automatically when removed from the parent. This is the typical way to go about deletions from relations, since session.delete() does not cascade backwards to all owning collections. Although, when I deal with columns that count something that is elsewhere represented in the database, I often issue these using SQL within a SessionExtension.after_flush(). This removes the need to worry about catching attribute events and just directly sets the correct value based on the state of the transaction post-flush.
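[Editorial note] The single-UPDATE batching suggested earlier in the thread can be sketched with a correlated subquery. Here it is against an in-memory SQLite database using only the standard library; table and column names are illustrative, and in real code the statement would be issued from after_flush() via session.connection():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE country (id INTEGER PRIMARY KEY, region_count INTEGER DEFAULT 0);
    CREATE TABLE region (id INTEGER PRIMARY KEY, country_id INTEGER);
    INSERT INTO country (id) VALUES (1), (2);
    INSERT INTO region (country_id) VALUES (1), (1), (2);
""")

changed_ids = [1, 2]  # ids of the parent objects touched during the flush
placeholders = ",".join("?" * len(changed_ids))
conn.execute(
    "UPDATE country SET region_count = ("
    " SELECT COUNT(*) FROM region WHERE region.country_id = country.id)"
    " WHERE country.id IN (%s)" % placeholders,
    changed_ids,
)

counts = dict(conn.execute("SELECT id, region_count FROM country"))
assert counts == {1: 2, 2: 1}
```

One statement recomputes every changed parent's count from the authoritative child rows, which also sidesteps the "was it deleted by SA or by the DB cascade?" bookkeeping question.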
[sqlalchemy] Re: foreign key problem when using reflection and schemas
That should be working now in r5203. The reflection code was missing an edge case where an explicit schema= is the same as the connection's schema. Switching those to schema=None should work as intended if you need a workaround on a released version. Cheers, Jason Martijn Faassen wrote: Hi there, I have a problem with foreign keys that seems to occur when I combine reflection and explicit schemas, in the context of MySQL. I've confirmed this problem with both rc2 and the trunk. It's best demonstrated with some failing code: Imagine the following MySQL database 'somedb': CREATE TABLE somedb.a ( id int PRIMARY KEY auto_increment NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; CREATE TABLE somedb.b ( id int PRIMARY KEY auto_increment NOT NULL, a_id int NOT NULL, FOREIGN KEY (a_id) REFERENCES somedb.a(id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; And the following code: from sqlalchemy import * from sqlalchemy.orm import mapper, relation, sessionmaker engine = create_engine('mysql:///somedb') meta = MetaData() meta.bind = engine a_table = Table( 'a', meta, schema='somedb', autoload=True) b_table = Table( 'b', meta, schema='somedb', autoload=True) class A(object): pass class B(object): pass mapper(A, a_table, properties={'bs': relation(B)}) mapper(B, b_table) Session = sessionmaker(bind=engine) session = Session() print session.query(A).all() When executing this code, the last line fails with the following error: Traceback (most recent call last): File bin/devpython, line 138, in ? execfile(sys.argv[0]) File experiment.py, line 33, in ? 
      print session.query(A).all()
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/session.py", line 914, in query
      return self._query_cls(entities, self, **kwargs)
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 95, in __init__
      self.__setup_aliasizers(self._entities)
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 109, in __setup_aliasizers
      mapper, selectable, is_aliased_class = _entity_info(entity)
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 454, in _entity_info
      mapper = class_mapper(entity, compile)
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 531, in class_mapper
      mapper = mapper.compile()
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 371, in compile
      mapper.__initialize_properties()
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 393, in __initialize_properties
      prop.init(key, self)
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/interfaces.py", line 384, in init
      self.do_init()
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 531, in do_init
      self._determine_joins()
    File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 604, in _determine_joins
      raise sa_exc.ArgumentError("Could not determine join condition between
  sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation A.bs. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

This code *only* fails if I designate an explicit 'schema' in the table statements. If I leave these out, things work as expected.
Since I'm interested in working with reflected tables that reside in multiple schemas, this is a problem. Digging around indicates that _search_for_join, defined in _determine_joins, does not actually find the join clause. Going deeper traces the failure down to the Join class in sqlalchemy.sql.expression, which fails in self._match_primaries in its __init__ method. This in turn brings us to sqlalchemy.sql.util.join_condition, which has fk.get_referent() return None if schemas are explicitly specified, and work fine if not. fk.get_referent() uses corresponding_column, which in turn tries contains_column(); that returns False in the schema case, but True if 'schema' is not explicitly specified. Why, I don't know. The repr of the column passed into contains_column looks the same as the repr of the column in the table, but apparently it's not exactly the same instance. Something somewhere is making the column different. Is this a bug? If so, how would we go about solving it? Regards, Martijn
[sqlalchemy] InvalidRequestError and ConcurrentModification problems
Hi all, I'm using Python 2.5.1 with SQLAlchemy 0.5rc2 and SQLite on Windows Server 2003, and I'm having a problem with my application throwing InvalidRequestError and ConcurrentModificationError exceptions. Here is my simplified declarative class:

  class Job(srsmanagerdb.Base):
      STATUS_INIT = 0
      STATUS_RUN = 1
      STATUS_DONE = 2
      STATUS_FAIL = 3

      __tablename__ = "jobs"

      id = Column(Integer, primary_key=True, autoincrement=True)
      nas = Column(String(12), default=None)
      filename = Column(String(64), default=None, index=True)
      filesize = Column(Integer, default=None)
      created = Column(DateTime, default=None)
      job_id = Column(String(32), default=None)
      productType = Column(String(1), default=None)
      contentType = Column(String(10), default=None)
      priorityType = Column(String(10), default=None)
      priority = Column(Integer, default=None)
      assignedPress = Column(Integer, default=None)
      status = Column(Integer, default=None)

      def __init__(self, objrefs, fileDetails):
          nas, filename, filesize, ctime = fileDetails
          self.nas = nas
          self.filename = filename
          self.filesize = filesize
          self.created = datetime.datetime(
              *time.strptime(ctime[:ctime.find(".")], "%Y-%m-%d %H:%M:%S")[0:6])

This object is used to track state information about jobs being handled by a looping state machine. I keep a list of all active jobs in a Jobs collection class, so there are many active instances of the above class.
The simplified Jobs collection class looks like this:

  class Jobs(AppContext):
      def __init__(self, objrefs):
          self._logger = __logger__
          self._jobs = []
          self._markedForRemoval = []

      def markForRemoval(self, job):
          self._markedForRemoval.append(job)

      def removeMarkedJobs(self):
          # throws exception in here
          session = srsmanagerdb.Session()
          for markedJob in self._markedForRemoval:
              try:
                  session.expire_all()
                  session.delete(markedJob)
                  session.commit()
                  self._jobs.remove(markedJob)
              except sqlalchemy.exceptions.ConcurrentModificationError, e:
                  self._logger.warn("%s threw exception %s" % (markedJob.filename, e))
          self._markedForRemoval = []

      def process(self):
          for job in self._jobs:
              job.process()
              if job.status == Job.STATUS_DONE:
                  self.markForRemoval(job)
          self.removeMarkedJobs()

The above simplified code runs for a while (tens of minutes) with hundreds of jobs and then throws the exception below in the removeMarkedJobs() method. I've worked really hard trying to figure out what's going wrong here. This is the only place where I delete jobs and commit that delete to the database. One question I have is whether it's a good idea to keep a list of active Job instances (database rows) in a Python list. In removeMarkedJobs() I delete the job instances and then remove them from the list. Is this necessary or good practice? I haven't figured out whether just removing the job instance from the list (self._jobs.remove(markedJob)) will also delete the job from the database or not. Anyway, here's the traceback of the exception I'm getting. Any help would be appreciated.
Thanks, Doug

  2008-10-27 18:15:54 srsmanager ERROR unexpected error, restarting:
  Traceback (most recent call last):
    File "c:\cygwin\home\ripadmin\dev\srsmanager\srsprocess.py", line 154, in runjobs
      isActive = self._jobs.process()
    File "c:\cygwin\home\ripadmin\dev\srsmanager\jobs.py", line 436, in process
      self.removeMarkedJobs()
    File "c:\cygwin\home\ripadmin\dev\srsmanager\jobs.py", line 397, in removeMarkedJobs
      self._logger.warn("%s threw exception %s" % (markedJob.filename, e))
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\attributes.py", line 135, in __get__
      return self.impl.get(instance_state(instance))
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\attributes.py", line 327, in get
      value = callable_()
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\attributes.py", line 909, in __call__
      attr.impl.key in unmodified
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\mapper.py", line 1715, in _load_scalar_attributes
      result = session.query(mapper)._get(identity_key, refresh_state=state, only_load_props=attribute_names)
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\query.py", line 1211, in _get
      return q.all()[0]
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\query.py", line 985, in all
      return list(self)
    File "c:\python\2.5\lib\site-packages\SQLAlchemy-0.5.0rc1-py2.5.egg\sqlalchemy\orm\query.py", line 1073, in __iter__
      return
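Note what the traceback actually shows: the warn() line itself is raising, because accessing markedJob.filename after the failed commit triggers a lazy refresh on an expired instance whose row is gone. One pattern that sidesteps this (a hedged sketch, not the poster's code — the function signature and the generic except clause are illustrative) is to read any attributes needed for logging before the delete, and to roll back the failed transaction:

```python
def remove_marked_jobs(session, jobs, marked, logger):
    """Delete each marked job; log failures without touching expired state."""
    for marked_job in marked:
        filename = marked_job.filename  # capture while the instance is loaded
        try:
            session.delete(marked_job)
            session.commit()
            jobs.remove(marked_job)
        except Exception as e:  # e.g. ConcurrentModificationError
            session.rollback()  # reset the failed transaction
            logger.warn("%s threw exception %s" % (filename, e))
    del marked[:]  # clear the removal queue in place
```

Capturing filename up front means the except branch formats its message from a plain string instead of re-fetching a deleted row, and the rollback leaves the session usable for the next iteration.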
[sqlalchemy] Re: Querying problem regarding joined table inheritance
Ok, the real-world structure was a little bit more complicated, but it worked out this way. There is only one problem left, regarding the name of the FK column which is responsible for resolving the inheritance. For constructing my join condition, I need to know the name of that column. Sometimes it is 'id', sometimes it is, for example, 'foo_id'. I searched with

  print dir(class_mapper(MyClass).mapped_table)
  print dir(class_mapper(MyClass).local_table)

and so on... I did not find a way to resolve the name of the column which is responsible for inheritance.

On 27 Okt., 22:04, Michael Bayer [EMAIL PROTECTED] wrote:

On Oct 27, 2008, at 4:57 PM, Matthias wrote:

At the moment I use a workaround which does exactly what you mentioned. It is a 'wanted class' query string discriminator. With the example this would work properly. The only thing I do not know is how to get the 'polymorphic_identity' which is defined in the mapper. How can I retrieve this string when only the class is given?

You could say class_mapper(MyClass).polymorphic_identity.

But there is still my bigger problem; of course we have to modify the scenario:

  class Parent(object): pass
  class Child(Parent): pass
  class SubChild1(Child): pass
  class SubChild2(Child): pass

Now I want to get all Child objects, meaning all Child, SubChild1 and SubChild2 objects:

  result = session.query(Child).from_statement(stmt).all()

The parent class of SubChild1 and SubChild2, Child, does not know anything about any subclasses. So if I only have the Child class I cannot imagine a smart way to solve this, because in plain SQL I need all possible type values. The nice thing about SQLAlchemy was the automatic inheritance resolver.

The Query() object, if you were not using from_statement(), would generate SQL that joins from the parent to the child table and returns all rows. This returns for you all Child, SubChild1, and SubChild2 objects. Parent rows which are not child rows won't be returned, since they do not join to child.
The remaining columns from SubChild1 and SubChild2 are loaded in separate SELECT statements. If OTOH you were using single table inheritance, the Query would in fact produce a clause like parent.type.in_(['child', 'subchild1', 'subchild2']). But for joined table inheritance, the join you're looking for here is:

  SELECT * FROM parent JOIN child ON parent.id = child.id

That way you also don't need to know the discriminator.
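The point about the join can be verified with plain SQL. A quick stdlib sqlite3 sketch (the schema and rows are made up for illustration) shows that the join by itself filters out plain Parent rows, with no list of discriminator values needed:

```python
import sqlite3

# Minimal joined-table-inheritance layout: every child row shares its
# primary key with a parent row. Names and rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, type TEXT);
    CREATE TABLE child (id INTEGER PRIMARY KEY REFERENCES parent(id), extra TEXT);
    INSERT INTO parent VALUES (1, 'parent'), (2, 'child'), (3, 'subchild1');
    INSERT INTO child VALUES (2, 'c'), (3, 's1');
""")

# The join alone restricts the result to child rows (and their subclasses);
# no IN ('child', 'subchild1', ...) clause on the type column is required.
rows = conn.execute("""
    SELECT parent.id, parent.type FROM parent
    JOIN child ON parent.id = child.id
    ORDER BY parent.id
""").fetchall()
print(rows)  # [(2, 'child'), (3, 'subchild1')]
```

Row 1, which exists only in parent, drops out of the result purely because it has no matching child row, which is exactly why the discriminator values are unnecessary for joined table inheritance.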